00:00:00.001 Started by upstream project "autotest-per-patch" build number 132044 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.043 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.044 The recommended git tool is: git 00:00:00.044 using credential 00000000-0000-0000-0000-000000000002 00:00:00.046 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.069 Fetching changes from the remote Git repository 00:00:00.071 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.099 Using shallow fetch with depth 1 00:00:00.099 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.099 > git --version # timeout=10 00:00:00.125 > git --version # 'git version 2.39.2' 00:00:00.125 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.144 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.144 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.280 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.291 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.304 Checking out Revision 71582ff3be096f9d5ed302be37c05572278bd285 (FETCH_HEAD) 00:00:03.304 > git config core.sparsecheckout # timeout=10 00:00:03.317 > git read-tree -mu HEAD # timeout=10 00:00:03.334 > git checkout -f 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=5 00:00:03.353 Commit message: "jenkins/jjb-config: Add SPDK_TEST_NVME_INTERRUPT to nvme-phy job" 00:00:03.353 > git rev-list --no-walk 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=10 00:00:03.442 [Pipeline] Start of Pipeline 00:00:03.452 [Pipeline] library 00:00:03.453 Loading library shm_lib@master 00:00:03.453 Library shm_lib@master is cached. Copying from home. 00:00:03.465 [Pipeline] node 00:00:03.473 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest_2 00:00:03.475 [Pipeline] { 00:00:03.482 [Pipeline] catchError 00:00:03.483 [Pipeline] { 00:00:03.491 [Pipeline] wrap 00:00:03.497 [Pipeline] { 00:00:03.502 [Pipeline] stage 00:00:03.504 [Pipeline] { (Prologue) 00:00:03.516 [Pipeline] echo 00:00:03.517 Node: VM-host-WFP1 00:00:03.521 [Pipeline] cleanWs 00:00:03.530 [WS-CLEANUP] Deleting project workspace... 00:00:03.530 [WS-CLEANUP] Deferred wipeout is used... 
00:00:03.537 [WS-CLEANUP] done 00:00:03.722 [Pipeline] setCustomBuildProperty 00:00:03.802 [Pipeline] httpRequest 00:00:04.198 [Pipeline] echo 00:00:04.200 Sorcerer 10.211.164.101 is alive 00:00:04.208 [Pipeline] retry 00:00:04.210 [Pipeline] { 00:00:04.222 [Pipeline] httpRequest 00:00:04.226 HttpMethod: GET 00:00:04.227 URL: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:04.227 Sending request to url: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:04.232 Response Code: HTTP/1.1 200 OK 00:00:04.233 Success: Status code 200 is in the accepted range: 200,404 00:00:04.234 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:12.103 [Pipeline] } 00:00:12.120 [Pipeline] // retry 00:00:12.127 [Pipeline] sh 00:00:12.410 + tar --no-same-owner -xf jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:12.426 [Pipeline] httpRequest 00:00:12.773 [Pipeline] echo 00:00:12.775 Sorcerer 10.211.164.101 is alive 00:00:12.784 [Pipeline] retry 00:00:12.787 [Pipeline] { 00:00:12.800 [Pipeline] httpRequest 00:00:12.804 HttpMethod: GET 00:00:12.805 URL: http://10.211.164.101/packages/spdk_61de1ff1769e5bf51fef5b571bd90d22754089f5.tar.gz 00:00:12.805 Sending request to url: http://10.211.164.101/packages/spdk_61de1ff1769e5bf51fef5b571bd90d22754089f5.tar.gz 00:00:12.817 Response Code: HTTP/1.1 200 OK 00:00:12.817 Success: Status code 200 is in the accepted range: 200,404 00:00:12.818 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_61de1ff1769e5bf51fef5b571bd90d22754089f5.tar.gz 00:01:49.269 [Pipeline] } 00:01:49.303 [Pipeline] // retry 00:01:49.312 [Pipeline] sh 00:01:49.613 + tar --no-same-owner -xf spdk_61de1ff1769e5bf51fef5b571bd90d22754089f5.tar.gz 00:01:52.155 [Pipeline] sh 00:01:52.455 + git -C spdk log --oneline -n5 00:01:52.455 61de1ff17 nvme/nvme: Factor out submit_request function 00:01:52.455 8171ee4bf accel/mlx5: Factor out task submissions 00:01:52.455 bf3b6da74 nvme/rdma: Remove qpair::max_recv_sge as unused 00:01:52.455 1ca833860 nvme/rdma: Add likely/unlikely to IO path 00:01:52.455 13fe09815 nvme/rdma: Factor our contig request preparation 00:01:52.473 [Pipeline] writeFile 00:01:52.487 [Pipeline] sh 00:01:52.772 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:52.784 [Pipeline] sh 00:01:53.066 + cat autorun-spdk.conf 00:01:53.066 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.066 SPDK_TEST_NVME=1 00:01:53.066 SPDK_TEST_FTL=1 00:01:53.066 SPDK_TEST_ISAL=1 00:01:53.066 SPDK_RUN_ASAN=1 00:01:53.066 SPDK_RUN_UBSAN=1 00:01:53.066 SPDK_TEST_XNVME=1 00:01:53.066 SPDK_TEST_NVME_FDP=1 00:01:53.066 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:53.073 RUN_NIGHTLY=0 00:01:53.075 [Pipeline] } 00:01:53.089 [Pipeline] // stage 00:01:53.104 [Pipeline] stage 00:01:53.106 [Pipeline] { (Run VM) 00:01:53.118 [Pipeline] sh 00:01:53.400 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:53.400 + echo 'Start stage prepare_nvme.sh' 00:01:53.400 Start stage prepare_nvme.sh 00:01:53.400 + [[ -n 7 ]] 00:01:53.400 + disk_prefix=ex7 00:01:53.400 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:01:53.400 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:01:53.400 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:01:53.400 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.400 ++ SPDK_TEST_NVME=1 00:01:53.400 ++ SPDK_TEST_FTL=1 00:01:53.400 ++ SPDK_TEST_ISAL=1 00:01:53.400 ++ SPDK_RUN_ASAN=1 
00:01:53.400 ++ SPDK_RUN_UBSAN=1 00:01:53.400 ++ SPDK_TEST_XNVME=1 00:01:53.400 ++ SPDK_TEST_NVME_FDP=1 00:01:53.400 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:53.400 ++ RUN_NIGHTLY=0 00:01:53.400 + cd /var/jenkins/workspace/nvme-vg-autotest_2 00:01:53.400 + nvme_files=() 00:01:53.400 + declare -A nvme_files 00:01:53.400 + backend_dir=/var/lib/libvirt/images/backends 00:01:53.400 + nvme_files['nvme.img']=5G 00:01:53.400 + nvme_files['nvme-cmb.img']=5G 00:01:53.400 + nvme_files['nvme-multi0.img']=4G 00:01:53.400 + nvme_files['nvme-multi1.img']=4G 00:01:53.400 + nvme_files['nvme-multi2.img']=4G 00:01:53.400 + nvme_files['nvme-openstack.img']=8G 00:01:53.400 + nvme_files['nvme-zns.img']=5G 00:01:53.400 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:53.400 + (( SPDK_TEST_FTL == 1 )) 00:01:53.400 + nvme_files["nvme-ftl.img"]=6G 00:01:53.400 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:53.400 + nvme_files["nvme-fdp.img"]=1G 00:01:53.400 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:53.400 + for nvme in "${!nvme_files[@]}" 00:01:53.400 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:53.400 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:53.400 + for nvme in "${!nvme_files[@]}" 00:01:53.400 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G 00:01:54.338 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:54.338 + for nvme in "${!nvme_files[@]}" 00:01:54.338 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:54.338 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:54.338 + for nvme in "${!nvme_files[@]}" 00:01:54.338 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:54.338 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:54.338 + for nvme in "${!nvme_files[@]}" 00:01:54.338 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:54.338 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:54.338 + for nvme in "${!nvme_files[@]}" 00:01:54.338 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:54.597 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:54.597 + for nvme in "${!nvme_files[@]}" 00:01:54.597 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:54.856 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:54.856 + for nvme in "${!nvme_files[@]}" 00:01:54.856 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G 00:01:54.856 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:54.856 + for nvme in "${!nvme_files[@]}" 00:01:54.856 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:55.424 Formatting 
'/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:55.424 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:55.424 + echo 'End stage prepare_nvme.sh' 00:01:55.424 End stage prepare_nvme.sh 00:01:55.435 [Pipeline] sh 00:01:55.717 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:55.717 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:55.717 00:01:55.717 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:01:55.717 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:01:55.717 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:01:55.717 HELP=0 00:01:55.717 DRY_RUN=0 00:01:55.717 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img, 00:01:55.717 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:55.717 NVME_AUTO_CREATE=0 00:01:55.717 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,, 00:01:55.717 NVME_CMB=,,,, 00:01:55.717 NVME_PMR=,,,, 00:01:55.717 NVME_ZNS=,,,, 00:01:55.717 NVME_MS=true,,,, 00:01:55.717 NVME_FDP=,,,on, 00:01:55.717 SPDK_VAGRANT_DISTRO=fedora39 00:01:55.717 SPDK_VAGRANT_VMCPU=10 00:01:55.717 SPDK_VAGRANT_VMRAM=12288 00:01:55.717 SPDK_VAGRANT_PROVIDER=libvirt 00:01:55.717 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:55.717 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:55.717 SPDK_OPENSTACK_NETWORK=0 00:01:55.717 VAGRANT_PACKAGE_BOX=0 00:01:55.717 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:55.717 FORCE_DISTRO=true 00:01:55.717 VAGRANT_BOX_VERSION= 00:01:55.717 EXTRA_VAGRANTFILES= 00:01:55.717 NIC_MODEL=e1000 00:01:55.717 00:01:55.717 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt' 00:01:55.717 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:01:58.272 Bringing machine 'default' up with 'libvirt' provider... 00:01:59.211 ==> default: Creating image (snapshot of base box volume). 00:01:59.470 ==> default: Creating domain with the following settings... 
00:01:59.470 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730735777_ba4825363beb4f27d1c3 00:01:59.470 ==> default: -- Domain type: kvm 00:01:59.470 ==> default: -- Cpus: 10 00:01:59.470 ==> default: -- Feature: acpi 00:01:59.470 ==> default: -- Feature: apic 00:01:59.470 ==> default: -- Feature: pae 00:01:59.470 ==> default: -- Memory: 12288M 00:01:59.470 ==> default: -- Memory Backing: hugepages: 00:01:59.470 ==> default: -- Management MAC: 00:01:59.470 ==> default: -- Loader: 00:01:59.470 ==> default: -- Nvram: 00:01:59.470 ==> default: -- Base box: spdk/fedora39 00:01:59.470 ==> default: -- Storage pool: default 00:01:59.470 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730735777_ba4825363beb4f27d1c3.img (20G) 00:01:59.470 ==> default: -- Volume Cache: default 00:01:59.470 ==> default: -- Kernel: 00:01:59.470 ==> default: -- Initrd: 00:01:59.470 ==> default: -- Graphics Type: vnc 00:01:59.470 ==> default: -- Graphics Port: -1 00:01:59.470 ==> default: -- Graphics IP: 127.0.0.1 00:01:59.470 ==> default: -- Graphics Password: Not defined 00:01:59.470 ==> default: -- Video Type: cirrus 00:01:59.470 ==> default: -- Video VRAM: 9216 00:01:59.470 ==> default: -- Sound Type: 00:01:59.470 ==> default: -- Keymap: en-us 00:01:59.470 ==> default: -- TPM Path: 00:01:59.470 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:59.470 ==> default: -- Command line args: 00:01:59.470 ==> default: -> value=-device, 00:01:59.470 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:59.470 ==> default: -> value=-drive, 00:01:59.470 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:59.470 ==> default: -> value=-device, 00:01:59.470 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:59.470 ==> default: -> value=-device, 00:01:59.470 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:59.470 ==> default: -> value=-drive, 00:01:59.470 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0, 00:01:59.470 ==> default: -> value=-device, 00:01:59.470 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:59.470 ==> default: -> value=-device, 00:01:59.470 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:59.470 ==> default: -> value=-drive, 00:01:59.470 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:59.470 ==> default: -> value=-device, 00:01:59.470 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:59.470 ==> default: -> value=-drive, 00:01:59.470 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:59.470 ==> default: -> value=-device, 00:01:59.470 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:59.470 ==> default: -> value=-drive, 00:01:59.470 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:59.470 ==> default: -> value=-device, 00:01:59.470 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:59.470 ==> default: -> value=-device, 00:01:59.470 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:59.470 ==> default: -> value=-device, 00:01:59.470 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:59.470 ==> default: -> value=-drive, 00:01:59.470 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:59.470 ==> default: -> value=-device, 00:01:59.470 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:00.039 ==> default: Creating shared folders metadata... 00:02:00.039 ==> default: Starting domain. 00:02:01.985 ==> default: Waiting for domain to get an IP address... 00:02:16.881 ==> default: Waiting for SSH to become available... 00:02:18.258 ==> default: Configuring and enabling network interfaces... 00:02:23.533 default: SSH address: 192.168.121.197:22 00:02:23.533 default: SSH username: vagrant 00:02:23.533 default: SSH auth method: private key 00:02:26.828 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:35.025 ==> default: Mounting SSHFS shared folder... 00:02:37.559 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:37.559 ==> default: Checking Mount.. 00:02:38.936 ==> default: Folder Successfully Mounted! 00:02:38.936 ==> default: Running provisioner: file... 00:02:40.324 default: ~/.gitconfig => .gitconfig 00:02:40.583 00:02:40.583 SUCCESS! 00:02:40.583 00:02:40.583 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:02:40.583 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:40.583 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:02:40.583 00:02:40.592 [Pipeline] } 00:02:40.606 [Pipeline] // stage 00:02:40.615 [Pipeline] dir 00:02:40.616 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt 00:02:40.617 [Pipeline] { 00:02:40.630 [Pipeline] catchError 00:02:40.632 [Pipeline] { 00:02:40.644 [Pipeline] sh 00:02:40.925 + vagrant ssh-config --host vagrant 00:02:40.925 + sed -ne /^Host/,$p 00:02:40.925 + tee ssh_conf 00:02:44.214 Host vagrant 00:02:44.214 HostName 192.168.121.197 00:02:44.214 User vagrant 00:02:44.214 Port 22 00:02:44.214 UserKnownHostsFile /dev/null 00:02:44.214 StrictHostKeyChecking no 00:02:44.214 PasswordAuthentication no 00:02:44.214 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:44.214 IdentitiesOnly yes 00:02:44.214 LogLevel FATAL 00:02:44.214 ForwardAgent yes 00:02:44.214 ForwardX11 yes 00:02:44.214 00:02:44.228 [Pipeline] withEnv 00:02:44.230 [Pipeline] { 00:02:44.244 [Pipeline] sh 00:02:44.526 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:44.526 source /etc/os-release 00:02:44.526 [[ -e /image.version ]] && img=$(< /image.version) 00:02:44.526 # Minimal, systemd-like check. 
00:02:44.526 if [[ -e /.dockerenv ]]; then 00:02:44.526 # Clear garbage from the node's name: 00:02:44.526 # agt-er_autotest_547-896 -> autotest_547-896 00:02:44.526 # $HOSTNAME is the actual container id 00:02:44.526 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:44.526 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:44.526 # We can assume this is a mount from a host where container is running, 00:02:44.526 # so fetch its hostname to easily identify the target swarm worker. 00:02:44.526 container="$(< /etc/hostname) ($agent)" 00:02:44.526 else 00:02:44.526 # Fallback 00:02:44.526 container=$agent 00:02:44.527 fi 00:02:44.527 fi 00:02:44.527 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:44.527 00:02:44.798 [Pipeline] } 00:02:44.817 [Pipeline] // withEnv 00:02:44.826 [Pipeline] setCustomBuildProperty 00:02:44.843 [Pipeline] stage 00:02:44.845 [Pipeline] { (Tests) 00:02:44.866 [Pipeline] sh 00:02:45.148 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:45.432 [Pipeline] sh 00:02:45.738 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:46.015 [Pipeline] timeout 00:02:46.016 Timeout set to expire in 50 min 00:02:46.018 [Pipeline] { 00:02:46.032 [Pipeline] sh 00:02:46.314 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:46.883 HEAD is now at 61de1ff17 nvme/nvme: Factor out submit_request function 00:02:46.895 [Pipeline] sh 00:02:47.178 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:47.452 [Pipeline] sh 00:02:47.733 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:48.011 [Pipeline] sh 00:02:48.294 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:48.554 ++ readlink -f spdk_repo 00:02:48.554 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:48.554 + [[ -n /home/vagrant/spdk_repo ]] 00:02:48.554 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:48.554 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:48.554 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:48.554 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:48.554 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:48.554 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:48.554 + cd /home/vagrant/spdk_repo 00:02:48.554 + source /etc/os-release 00:02:48.554 ++ NAME='Fedora Linux' 00:02:48.554 ++ VERSION='39 (Cloud Edition)' 00:02:48.554 ++ ID=fedora 00:02:48.554 ++ VERSION_ID=39 00:02:48.554 ++ VERSION_CODENAME= 00:02:48.554 ++ PLATFORM_ID=platform:f39 00:02:48.554 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:48.554 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:48.554 ++ LOGO=fedora-logo-icon 00:02:48.554 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:48.554 ++ HOME_URL=https://fedoraproject.org/ 00:02:48.554 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:48.554 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:48.554 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:48.554 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:48.554 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:48.554 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:48.554 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:48.554 ++ SUPPORT_END=2024-11-12 00:02:48.554 ++ VARIANT='Cloud Edition' 00:02:48.554 ++ VARIANT_ID=cloud 00:02:48.554 + uname -a 00:02:48.554 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:48.554 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:49.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:49.424 Hugepages 00:02:49.424 node hugesize free / total 00:02:49.424 node0 1048576kB 0 / 0 00:02:49.424 node0 2048kB 0 / 0 00:02:49.424 00:02:49.424 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:49.424 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:49.424 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:49.424 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:49.424 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:49.424 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:49.424 + rm -f /tmp/spdk-ld-path 00:02:49.424 + source autorun-spdk.conf 00:02:49.424 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:49.424 ++ SPDK_TEST_NVME=1 00:02:49.424 ++ SPDK_TEST_FTL=1 00:02:49.424 ++ SPDK_TEST_ISAL=1 00:02:49.424 ++ SPDK_RUN_ASAN=1 00:02:49.424 ++ SPDK_RUN_UBSAN=1 00:02:49.424 ++ SPDK_TEST_XNVME=1 00:02:49.424 ++ SPDK_TEST_NVME_FDP=1 00:02:49.424 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:49.424 ++ RUN_NIGHTLY=0 00:02:49.424 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:49.424 + [[ -n '' ]] 00:02:49.424 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:49.684 + for M in /var/spdk/build-*-manifest.txt 00:02:49.684 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:49.684 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:49.684 + for M in /var/spdk/build-*-manifest.txt 00:02:49.684 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:49.684 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:49.684 + for M in /var/spdk/build-*-manifest.txt 00:02:49.684 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:49.684 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:49.684 ++ uname 00:02:49.684 + [[ Linux == \L\i\n\u\x ]] 00:02:49.684 + sudo dmesg -T 00:02:49.684 + sudo dmesg --clear 00:02:49.684 + dmesg_pid=5243 00:02:49.684 
+ sudo dmesg -Tw 00:02:49.684 + [[ Fedora Linux == FreeBSD ]] 00:02:49.684 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:49.684 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:49.684 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:49.684 + [[ -x /usr/src/fio-static/fio ]] 00:02:49.684 + export FIO_BIN=/usr/src/fio-static/fio 00:02:49.684 + FIO_BIN=/usr/src/fio-static/fio 00:02:49.684 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:49.684 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:49.684 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:49.684 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:49.684 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:49.684 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:49.684 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:49.684 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:49.684 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:49.944 15:57:08 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:49.944 15:57:08 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:49.944 15:57:08 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:49.944 15:57:08 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:49.944 15:57:08 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:49.944 15:57:08 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:49.944 15:57:08 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:49.944 15:57:08 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:49.944 15:57:08 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:49.944 15:57:08 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:49.944 15:57:08 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:49.944 15:57:08 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:02:49.944 15:57:08 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:49.944 15:57:08 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:49.944 15:57:08 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:49.944 15:57:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:49.944 15:57:08 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:49.944 15:57:08 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:49.944 15:57:08 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:49.944 15:57:08 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:49.944 15:57:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:49.944 15:57:08 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:49.944 15:57:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:49.944 15:57:08 -- paths/export.sh@5 -- $ export PATH 00:02:49.944 15:57:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:49.944 15:57:08 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:49.944 15:57:08 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:49.944 15:57:08 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730735828.XXXXXX 00:02:49.944 15:57:08 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730735828.P0346a 00:02:49.944 15:57:08 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:49.944 15:57:08 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:49.944 15:57:08 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:49.944 15:57:08 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:49.944 15:57:08 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:49.944 15:57:08 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:49.944 15:57:08 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:49.944 15:57:08 -- common/autotest_common.sh@10 -- $ set +x 00:02:49.944 15:57:08 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:49.944 15:57:08 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:49.944 15:57:08 -- pm/common@17 -- $ local monitor 00:02:49.944 15:57:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:49.945 15:57:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:49.945 15:57:08 -- pm/common@25 -- $ sleep 1 00:02:49.945 15:57:08 -- pm/common@21 -- $ date +%s 00:02:49.945 15:57:08 -- pm/common@21 -- $ date +%s 00:02:49.945 15:57:08 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730735828 00:02:49.945 15:57:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730735828 00:02:49.945 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730735828_collect-vmstat.pm.log 00:02:49.945 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730735828_collect-cpu-load.pm.log 00:02:50.883 15:57:09 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:50.883 15:57:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:50.883 15:57:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:50.883 15:57:09 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:50.883 15:57:09 -- spdk/autobuild.sh@16 -- $ date -u 00:02:50.883 Mon Nov 4 03:57:09 PM UTC 2024 00:02:50.883 15:57:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:50.883 v25.01-pre-165-g61de1ff17 00:02:50.883 15:57:09 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:50.883 15:57:09 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:50.883 15:57:09 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:50.883 15:57:09 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:50.883 15:57:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:51.142 ************************************ 00:02:51.142 START TEST asan 00:02:51.142 ************************************ 00:02:51.142 using asan 00:02:51.142 15:57:09 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:02:51.142 00:02:51.142 real 0m0.000s 00:02:51.142 user 0m0.000s 00:02:51.142 sys 0m0.000s 00:02:51.142 15:57:09 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:51.142 15:57:09 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:51.142 ************************************ 00:02:51.142 END TEST asan 00:02:51.142 ************************************ 00:02:51.142 15:57:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:51.142 15:57:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:51.142 15:57:09 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:51.142 15:57:09 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:51.142 15:57:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:51.142 ************************************ 00:02:51.142 START TEST ubsan 00:02:51.142 ************************************ 00:02:51.142 using ubsan 00:02:51.142 15:57:09 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:51.142 00:02:51.142 real 0m0.000s 00:02:51.142 user 0m0.000s 00:02:51.142 sys 0m0.000s 00:02:51.142 15:57:09 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:51.142 ************************************ 00:02:51.142 END TEST ubsan 00:02:51.142 15:57:09 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:51.142 ************************************ 00:02:51.142 15:57:09 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:51.142 15:57:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:51.142 15:57:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:51.142 15:57:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:51.142 15:57:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:51.142 15:57:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:51.142 15:57:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:02:51.142 15:57:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:51.142 15:57:09 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:51.402 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:51.402 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:51.971 Using 'verbs' RDMA provider 00:03:07.799 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:25.895 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:25.895 Creating mk/config.mk...done. 00:03:25.895 Creating mk/cc.flags.mk...done. 00:03:25.895 Type 'make' to build. 00:03:25.895 15:57:42 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:25.895 15:57:42 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:03:25.895 15:57:42 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:03:25.895 15:57:42 -- common/autotest_common.sh@10 -- $ set +x 00:03:25.895 ************************************ 00:03:25.895 START TEST make 00:03:25.895 ************************************ 00:03:25.895 15:57:42 make -- common/autotest_common.sh@1127 -- $ make -j10 00:03:25.895 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:25.895 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:25.895 meson setup builddir \ 00:03:25.895 -Dwith-libaio=enabled \ 00:03:25.895 -Dwith-liburing=enabled \ 00:03:25.895 -Dwith-libvfn=disabled \ 00:03:25.895 -Dwith-spdk=disabled \ 00:03:25.895 -Dexamples=false \ 00:03:25.895 -Dtests=false \ 00:03:25.895 -Dtools=false && \ 00:03:25.895 meson compile -C builddir && \ 00:03:25.895 cd -) 00:03:25.895 make[1]: Nothing to be done for 'all'. 
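For reference, the xnvme sub-build that the make step above kicks off amounts to the meson invocation below. This is a sketch reassembled from the command echoed in the log (the path and every option are taken verbatim from it); the comments on what each option does are inferred from the option names and from the meson configure output that follows, not from anything beyond this log.

    cd /home/vagrant/spdk_repo/spdk/xnvme
    # Configure the xnvme build: the Linux libaio and io_uring backends are
    # enabled, the libvfn and SPDK backends are left out (the meson output
    # below reports both as skipped), and examples/tests/tools are not built.
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    # Compile the configured targets (lib/libxnvme.so in this run).
    meson compile -C builddir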
00:03:27.798 The Meson build system 00:03:27.798 Version: 1.5.0 00:03:27.798 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:27.798 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:27.798 Build type: native build 00:03:27.798 Project name: xnvme 00:03:27.798 Project version: 0.7.5 00:03:27.798 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:27.798 C linker for the host machine: cc ld.bfd 2.40-14 00:03:27.798 Host machine cpu family: x86_64 00:03:27.798 Host machine cpu: x86_64 00:03:27.798 Message: host_machine.system: linux 00:03:27.798 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:27.798 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:27.798 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:27.798 Run-time dependency threads found: YES 00:03:27.798 Has header "setupapi.h" : NO 00:03:27.798 Has header "linux/blkzoned.h" : YES 00:03:27.798 Has header "linux/blkzoned.h" : YES (cached) 00:03:27.798 Has header "libaio.h" : YES 00:03:27.798 Library aio found: YES 00:03:27.798 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:27.798 Run-time dependency liburing found: YES 2.2 00:03:27.798 Dependency libvfn skipped: feature with-libvfn disabled 00:03:27.798 Found CMake: /usr/bin/cmake (3.27.7) 00:03:27.798 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:27.798 Subproject spdk : skipped: feature with-spdk disabled 00:03:27.798 Run-time dependency appleframeworks found: NO (tried framework) 00:03:27.798 Run-time dependency appleframeworks found: NO (tried framework) 00:03:27.798 Library rt found: YES 00:03:27.798 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:27.798 Configuring xnvme_config.h using configuration 00:03:27.798 Configuring xnvme.spec using configuration 00:03:27.798 Run-time dependency bash-completion found: YES 2.11 00:03:27.798 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:27.798 Program cp found: YES (/usr/bin/cp) 00:03:27.798 Build targets in project: 3 00:03:27.798 00:03:27.798 xnvme 0.7.5 00:03:27.798 00:03:27.798 Subprojects 00:03:27.798 spdk : NO Feature 'with-spdk' disabled 00:03:27.798 00:03:27.798 User defined options 00:03:27.798 examples : false 00:03:27.798 tests : false 00:03:27.798 tools : false 00:03:27.798 with-libaio : enabled 00:03:27.798 with-liburing: enabled 00:03:27.798 with-libvfn : disabled 00:03:27.798 with-spdk : disabled 00:03:27.798 00:03:27.798 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:28.367 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:28.367 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:28.367 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:28.367 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:28.367 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:28.367 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:28.367 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:28.367 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:28.367 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:28.367 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:28.367 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 
00:03:28.367 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:28.367 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:28.367 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:28.625 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:28.625 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:28.625 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:28.625 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:28.625 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:28.625 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:28.625 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:28.625 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:28.625 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:28.625 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:28.625 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:28.625 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:28.625 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:28.625 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:28.625 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:28.625 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:28.625 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:28.625 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:28.626 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:28.626 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:28.884 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:28.884 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:28.884 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:28.884 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:28.884 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:28.884 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:28.884 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:28.884 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:28.884 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:28.884 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:28.884 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:28.884 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:28.884 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:28.884 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:28.884 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:28.884 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:28.884 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 
00:03:28.884 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:28.884 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:28.884 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:28.884 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:28.884 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:29.143 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:29.143 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:29.143 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:29.143 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:29.143 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:29.143 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:29.143 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:29.143 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:29.143 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:29.143 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:29.143 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:29.143 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:29.143 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:29.434 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:29.434 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:29.434 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:29.434 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:29.434 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:29.692 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:29.692 [75/76] Linking static target lib/libxnvme.a 00:03:29.692 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:29.692 INFO: autodetecting backend as ninja 00:03:29.692 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:29.692 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:37.978 The Meson build system 00:03:37.978 Version: 1.5.0 00:03:37.978 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:37.978 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:37.978 Build type: native build 00:03:37.978 Program cat found: YES (/usr/bin/cat) 00:03:37.978 Project name: DPDK 00:03:37.978 Project version: 24.03.0 00:03:37.978 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:37.978 C linker for the host machine: cc ld.bfd 2.40-14 00:03:37.978 Host machine cpu family: x86_64 00:03:37.978 Host machine cpu: x86_64 00:03:37.978 Message: ## Building in Developer Mode ## 00:03:37.978 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:37.978 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:37.978 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:37.978 Program python3 found: YES (/usr/bin/python3) 00:03:37.978 Program cat found: YES (/usr/bin/cat) 00:03:37.978 Compiler for C supports arguments -march=native: YES 00:03:37.978 Checking for size of "void *" : 8 00:03:37.978 Checking for size of "void *" : 8 (cached) 00:03:37.978 Compiler for C supports 
link arguments -Wl,--undefined-version: YES 00:03:37.978 Library m found: YES 00:03:37.978 Library numa found: YES 00:03:37.978 Has header "numaif.h" : YES 00:03:37.978 Library fdt found: NO 00:03:37.978 Library execinfo found: NO 00:03:37.978 Has header "execinfo.h" : YES 00:03:37.978 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:37.978 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:37.978 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:37.978 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:37.978 Run-time dependency openssl found: YES 3.1.1 00:03:37.978 Run-time dependency libpcap found: YES 1.10.4 00:03:37.978 Has header "pcap.h" with dependency libpcap: YES 00:03:37.978 Compiler for C supports arguments -Wcast-qual: YES 00:03:37.978 Compiler for C supports arguments -Wdeprecated: YES 00:03:37.978 Compiler for C supports arguments -Wformat: YES 00:03:37.978 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:37.978 Compiler for C supports arguments -Wformat-security: NO 00:03:37.978 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:37.978 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:37.978 Compiler for C supports arguments -Wnested-externs: YES 00:03:37.978 Compiler for C supports arguments -Wold-style-definition: YES 00:03:37.978 Compiler for C supports arguments -Wpointer-arith: YES 00:03:37.978 Compiler for C supports arguments -Wsign-compare: YES 00:03:37.978 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:37.978 Compiler for C supports arguments -Wundef: YES 00:03:37.978 Compiler for C supports arguments -Wwrite-strings: YES 00:03:37.978 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:37.978 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:37.978 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:37.978 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:37.978 Program objdump found: YES (/usr/bin/objdump) 00:03:37.978 Compiler for C supports arguments -mavx512f: YES 00:03:37.978 Checking if "AVX512 checking" compiles: YES 00:03:37.978 Fetching value of define "__SSE4_2__" : 1 00:03:37.978 Fetching value of define "__AES__" : 1 00:03:37.978 Fetching value of define "__AVX__" : 1 00:03:37.978 Fetching value of define "__AVX2__" : 1 00:03:37.978 Fetching value of define "__AVX512BW__" : 1 00:03:37.978 Fetching value of define "__AVX512CD__" : 1 00:03:37.978 Fetching value of define "__AVX512DQ__" : 1 00:03:37.978 Fetching value of define "__AVX512F__" : 1 00:03:37.978 Fetching value of define "__AVX512VL__" : 1 00:03:37.978 Fetching value of define "__PCLMUL__" : 1 00:03:37.978 Fetching value of define "__RDRND__" : 1 00:03:37.978 Fetching value of define "__RDSEED__" : 1 00:03:37.978 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:37.978 Fetching value of define "__znver1__" : (undefined) 00:03:37.978 Fetching value of define "__znver2__" : (undefined) 00:03:37.979 Fetching value of define "__znver3__" : (undefined) 00:03:37.979 Fetching value of define "__znver4__" : (undefined) 00:03:37.979 Library asan found: YES 00:03:37.979 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:37.979 Message: lib/log: Defining dependency "log" 00:03:37.979 Message: lib/kvargs: Defining dependency "kvargs" 00:03:37.979 Message: lib/telemetry: Defining dependency "telemetry" 00:03:37.979 Library rt found: YES 00:03:37.979 Checking for function "getentropy" : 
NO 00:03:37.979 Message: lib/eal: Defining dependency "eal" 00:03:37.979 Message: lib/ring: Defining dependency "ring" 00:03:37.979 Message: lib/rcu: Defining dependency "rcu" 00:03:37.979 Message: lib/mempool: Defining dependency "mempool" 00:03:37.979 Message: lib/mbuf: Defining dependency "mbuf" 00:03:37.979 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:37.979 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:37.979 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:37.979 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:37.979 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:37.979 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:37.979 Compiler for C supports arguments -mpclmul: YES 00:03:37.979 Compiler for C supports arguments -maes: YES 00:03:37.979 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:37.979 Compiler for C supports arguments -mavx512bw: YES 00:03:37.979 Compiler for C supports arguments -mavx512dq: YES 00:03:37.979 Compiler for C supports arguments -mavx512vl: YES 00:03:37.979 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:37.979 Compiler for C supports arguments -mavx2: YES 00:03:37.979 Compiler for C supports arguments -mavx: YES 00:03:37.979 Message: lib/net: Defining dependency "net" 00:03:37.979 Message: lib/meter: Defining dependency "meter" 00:03:37.979 Message: lib/ethdev: Defining dependency "ethdev" 00:03:37.979 Message: lib/pci: Defining dependency "pci" 00:03:37.979 Message: lib/cmdline: Defining dependency "cmdline" 00:03:37.979 Message: lib/hash: Defining dependency "hash" 00:03:37.979 Message: lib/timer: Defining dependency "timer" 00:03:37.979 Message: lib/compressdev: Defining dependency "compressdev" 00:03:37.979 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:37.979 Message: lib/dmadev: Defining dependency "dmadev" 00:03:37.979 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:37.979 Message: lib/power: Defining dependency "power" 00:03:37.979 Message: lib/reorder: Defining dependency "reorder" 00:03:37.979 Message: lib/security: Defining dependency "security" 00:03:37.979 Has header "linux/userfaultfd.h" : YES 00:03:37.979 Has header "linux/vduse.h" : YES 00:03:37.979 Message: lib/vhost: Defining dependency "vhost" 00:03:37.979 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:37.979 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:37.979 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:37.979 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:37.979 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:37.979 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:37.979 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:37.979 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:37.979 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:37.979 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:37.979 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:37.979 Configuring doxy-api-html.conf using configuration 00:03:37.979 Configuring doxy-api-man.conf using configuration 00:03:37.979 Program mandb found: YES (/usr/bin/mandb) 00:03:37.979 Program sphinx-build found: NO 00:03:37.979 Configuring rte_build_config.h using configuration 00:03:37.979 Message: 00:03:37.979 ================= 00:03:37.979 
Applications Enabled 00:03:37.979 ================= 00:03:37.979 00:03:37.979 apps: 00:03:37.979 00:03:37.979 00:03:37.979 Message: 00:03:37.979 ================= 00:03:37.979 Libraries Enabled 00:03:37.979 ================= 00:03:37.979 00:03:37.979 libs: 00:03:37.979 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:37.979 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:37.979 cryptodev, dmadev, power, reorder, security, vhost, 00:03:37.979 00:03:37.979 Message: 00:03:37.979 =============== 00:03:37.979 Drivers Enabled 00:03:37.979 =============== 00:03:37.979 00:03:37.979 common: 00:03:37.979 00:03:37.979 bus: 00:03:37.979 pci, vdev, 00:03:37.979 mempool: 00:03:37.979 ring, 00:03:37.979 dma: 00:03:37.979 00:03:37.979 net: 00:03:37.979 00:03:37.979 crypto: 00:03:37.979 00:03:37.979 compress: 00:03:37.979 00:03:37.979 vdpa: 00:03:37.979 00:03:37.979 00:03:37.979 Message: 00:03:37.979 ================= 00:03:37.979 Content Skipped 00:03:37.979 ================= 00:03:37.979 00:03:37.979 apps: 00:03:37.979 dumpcap: explicitly disabled via build config 00:03:37.979 graph: explicitly disabled via build config 00:03:37.979 pdump: explicitly disabled via build config 00:03:37.979 proc-info: explicitly disabled via build config 00:03:37.979 test-acl: explicitly disabled via build config 00:03:37.979 test-bbdev: explicitly disabled via build config 00:03:37.979 test-cmdline: explicitly disabled via build config 00:03:37.979 test-compress-perf: explicitly disabled via build config 00:03:37.979 test-crypto-perf: explicitly disabled via build config 00:03:37.979 test-dma-perf: explicitly disabled via build config 00:03:37.979 test-eventdev: explicitly disabled via build config 00:03:37.979 test-fib: explicitly disabled via build config 00:03:37.979 test-flow-perf: explicitly disabled via build config 00:03:37.979 test-gpudev: explicitly disabled via build config 00:03:37.979 test-mldev: explicitly disabled via build config 00:03:37.979 test-pipeline: explicitly disabled via build config 00:03:37.979 test-pmd: explicitly disabled via build config 00:03:37.979 test-regex: explicitly disabled via build config 00:03:37.979 test-sad: explicitly disabled via build config 00:03:37.979 test-security-perf: explicitly disabled via build config 00:03:37.979 00:03:37.979 libs: 00:03:37.979 argparse: explicitly disabled via build config 00:03:37.979 metrics: explicitly disabled via build config 00:03:37.979 acl: explicitly disabled via build config 00:03:37.979 bbdev: explicitly disabled via build config 00:03:37.979 bitratestats: explicitly disabled via build config 00:03:37.979 bpf: explicitly disabled via build config 00:03:37.979 cfgfile: explicitly disabled via build config 00:03:37.979 distributor: explicitly disabled via build config 00:03:37.979 efd: explicitly disabled via build config 00:03:37.979 eventdev: explicitly disabled via build config 00:03:37.979 dispatcher: explicitly disabled via build config 00:03:37.979 gpudev: explicitly disabled via build config 00:03:37.979 gro: explicitly disabled via build config 00:03:37.979 gso: explicitly disabled via build config 00:03:37.979 ip_frag: explicitly disabled via build config 00:03:37.979 jobstats: explicitly disabled via build config 00:03:37.979 latencystats: explicitly disabled via build config 00:03:37.979 lpm: explicitly disabled via build config 00:03:37.979 member: explicitly disabled via build config 00:03:37.979 pcapng: explicitly disabled via build config 00:03:37.979 rawdev: explicitly disabled via build config 
00:03:37.979 regexdev: explicitly disabled via build config 00:03:37.979 mldev: explicitly disabled via build config 00:03:37.979 rib: explicitly disabled via build config 00:03:37.979 sched: explicitly disabled via build config 00:03:37.979 stack: explicitly disabled via build config 00:03:37.979 ipsec: explicitly disabled via build config 00:03:37.979 pdcp: explicitly disabled via build config 00:03:37.979 fib: explicitly disabled via build config 00:03:37.979 port: explicitly disabled via build config 00:03:37.979 pdump: explicitly disabled via build config 00:03:37.979 table: explicitly disabled via build config 00:03:37.979 pipeline: explicitly disabled via build config 00:03:37.979 graph: explicitly disabled via build config 00:03:37.979 node: explicitly disabled via build config 00:03:37.979 00:03:37.979 drivers: 00:03:37.979 common/cpt: not in enabled drivers build config 00:03:37.979 common/dpaax: not in enabled drivers build config 00:03:37.979 common/iavf: not in enabled drivers build config 00:03:37.979 common/idpf: not in enabled drivers build config 00:03:37.979 common/ionic: not in enabled drivers build config 00:03:37.979 common/mvep: not in enabled drivers build config 00:03:37.979 common/octeontx: not in enabled drivers build config 00:03:37.979 bus/auxiliary: not in enabled drivers build config 00:03:37.979 bus/cdx: not in enabled drivers build config 00:03:37.979 bus/dpaa: not in enabled drivers build config 00:03:37.979 bus/fslmc: not in enabled drivers build config 00:03:37.979 bus/ifpga: not in enabled drivers build config 00:03:37.979 bus/platform: not in enabled drivers build config 00:03:37.979 bus/uacce: not in enabled drivers build config 00:03:37.979 bus/vmbus: not in enabled drivers build config 00:03:37.979 common/cnxk: not in enabled drivers build config 00:03:37.979 common/mlx5: not in enabled drivers build config 00:03:37.979 common/nfp: not in enabled drivers build config 00:03:37.979 common/nitrox: not in enabled drivers build config 00:03:37.979 common/qat: not in enabled drivers build config 00:03:37.979 common/sfc_efx: not in enabled drivers build config 00:03:37.979 mempool/bucket: not in enabled drivers build config 00:03:37.979 mempool/cnxk: not in enabled drivers build config 00:03:37.979 mempool/dpaa: not in enabled drivers build config 00:03:37.979 mempool/dpaa2: not in enabled drivers build config 00:03:37.979 mempool/octeontx: not in enabled drivers build config 00:03:37.979 mempool/stack: not in enabled drivers build config 00:03:37.979 dma/cnxk: not in enabled drivers build config 00:03:37.979 dma/dpaa: not in enabled drivers build config 00:03:37.979 dma/dpaa2: not in enabled drivers build config 00:03:37.979 dma/hisilicon: not in enabled drivers build config 00:03:37.979 dma/idxd: not in enabled drivers build config 00:03:37.979 dma/ioat: not in enabled drivers build config 00:03:37.979 dma/skeleton: not in enabled drivers build config 00:03:37.979 net/af_packet: not in enabled drivers build config 00:03:37.979 net/af_xdp: not in enabled drivers build config 00:03:37.979 net/ark: not in enabled drivers build config 00:03:37.979 net/atlantic: not in enabled drivers build config 00:03:37.980 net/avp: not in enabled drivers build config 00:03:37.980 net/axgbe: not in enabled drivers build config 00:03:37.980 net/bnx2x: not in enabled drivers build config 00:03:37.980 net/bnxt: not in enabled drivers build config 00:03:37.980 net/bonding: not in enabled drivers build config 00:03:37.980 net/cnxk: not in enabled drivers build config 
00:03:37.980 net/cpfl: not in enabled drivers build config 00:03:37.980 net/cxgbe: not in enabled drivers build config 00:03:37.980 net/dpaa: not in enabled drivers build config 00:03:37.980 net/dpaa2: not in enabled drivers build config 00:03:37.980 net/e1000: not in enabled drivers build config 00:03:37.980 net/ena: not in enabled drivers build config 00:03:37.980 net/enetc: not in enabled drivers build config 00:03:37.980 net/enetfec: not in enabled drivers build config 00:03:37.980 net/enic: not in enabled drivers build config 00:03:37.980 net/failsafe: not in enabled drivers build config 00:03:37.980 net/fm10k: not in enabled drivers build config 00:03:37.980 net/gve: not in enabled drivers build config 00:03:37.980 net/hinic: not in enabled drivers build config 00:03:37.980 net/hns3: not in enabled drivers build config 00:03:37.980 net/i40e: not in enabled drivers build config 00:03:37.980 net/iavf: not in enabled drivers build config 00:03:37.980 net/ice: not in enabled drivers build config 00:03:37.980 net/idpf: not in enabled drivers build config 00:03:37.980 net/igc: not in enabled drivers build config 00:03:37.980 net/ionic: not in enabled drivers build config 00:03:37.980 net/ipn3ke: not in enabled drivers build config 00:03:37.980 net/ixgbe: not in enabled drivers build config 00:03:37.980 net/mana: not in enabled drivers build config 00:03:37.980 net/memif: not in enabled drivers build config 00:03:37.980 net/mlx4: not in enabled drivers build config 00:03:37.980 net/mlx5: not in enabled drivers build config 00:03:37.980 net/mvneta: not in enabled drivers build config 00:03:37.980 net/mvpp2: not in enabled drivers build config 00:03:37.980 net/netvsc: not in enabled drivers build config 00:03:37.980 net/nfb: not in enabled drivers build config 00:03:37.980 net/nfp: not in enabled drivers build config 00:03:37.980 net/ngbe: not in enabled drivers build config 00:03:37.980 net/null: not in enabled drivers build config 00:03:37.980 net/octeontx: not in enabled drivers build config 00:03:37.980 net/octeon_ep: not in enabled drivers build config 00:03:37.980 net/pcap: not in enabled drivers build config 00:03:37.980 net/pfe: not in enabled drivers build config 00:03:37.980 net/qede: not in enabled drivers build config 00:03:37.980 net/ring: not in enabled drivers build config 00:03:37.980 net/sfc: not in enabled drivers build config 00:03:37.980 net/softnic: not in enabled drivers build config 00:03:37.980 net/tap: not in enabled drivers build config 00:03:37.980 net/thunderx: not in enabled drivers build config 00:03:37.980 net/txgbe: not in enabled drivers build config 00:03:37.980 net/vdev_netvsc: not in enabled drivers build config 00:03:37.980 net/vhost: not in enabled drivers build config 00:03:37.980 net/virtio: not in enabled drivers build config 00:03:37.980 net/vmxnet3: not in enabled drivers build config 00:03:37.980 raw/*: missing internal dependency, "rawdev" 00:03:37.980 crypto/armv8: not in enabled drivers build config 00:03:37.980 crypto/bcmfs: not in enabled drivers build config 00:03:37.980 crypto/caam_jr: not in enabled drivers build config 00:03:37.980 crypto/ccp: not in enabled drivers build config 00:03:37.980 crypto/cnxk: not in enabled drivers build config 00:03:37.980 crypto/dpaa_sec: not in enabled drivers build config 00:03:37.980 crypto/dpaa2_sec: not in enabled drivers build config 00:03:37.980 crypto/ipsec_mb: not in enabled drivers build config 00:03:37.980 crypto/mlx5: not in enabled drivers build config 00:03:37.980 crypto/mvsam: not in enabled 
drivers build config 00:03:37.980 crypto/nitrox: not in enabled drivers build config 00:03:37.980 crypto/null: not in enabled drivers build config 00:03:37.980 crypto/octeontx: not in enabled drivers build config 00:03:37.980 crypto/openssl: not in enabled drivers build config 00:03:37.980 crypto/scheduler: not in enabled drivers build config 00:03:37.980 crypto/uadk: not in enabled drivers build config 00:03:37.980 crypto/virtio: not in enabled drivers build config 00:03:37.980 compress/isal: not in enabled drivers build config 00:03:37.980 compress/mlx5: not in enabled drivers build config 00:03:37.980 compress/nitrox: not in enabled drivers build config 00:03:37.980 compress/octeontx: not in enabled drivers build config 00:03:37.980 compress/zlib: not in enabled drivers build config 00:03:37.980 regex/*: missing internal dependency, "regexdev" 00:03:37.980 ml/*: missing internal dependency, "mldev" 00:03:37.980 vdpa/ifc: not in enabled drivers build config 00:03:37.980 vdpa/mlx5: not in enabled drivers build config 00:03:37.980 vdpa/nfp: not in enabled drivers build config 00:03:37.980 vdpa/sfc: not in enabled drivers build config 00:03:37.980 event/*: missing internal dependency, "eventdev" 00:03:37.980 baseband/*: missing internal dependency, "bbdev" 00:03:37.980 gpu/*: missing internal dependency, "gpudev" 00:03:37.980 00:03:37.980 00:03:37.980 Build targets in project: 85 00:03:37.980 00:03:37.980 DPDK 24.03.0 00:03:37.980 00:03:37.980 User defined options 00:03:37.980 buildtype : debug 00:03:37.980 default_library : shared 00:03:37.980 libdir : lib 00:03:37.980 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:37.980 b_sanitize : address 00:03:37.980 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:37.980 c_link_args : 00:03:37.980 cpu_instruction_set: native 00:03:37.980 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:37.980 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:37.980 enable_docs : false 00:03:37.980 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:37.980 enable_kmods : false 00:03:37.980 max_lcores : 128 00:03:37.980 tests : false 00:03:37.980 00:03:37.980 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:37.980 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:38.239 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:38.239 [2/268] Linking static target lib/librte_kvargs.a 00:03:38.239 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:38.239 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:38.239 [5/268] Linking static target lib/librte_log.a 00:03:38.239 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:38.498 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.757 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:38.757 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:38.757 [10/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:38.757 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:38.757 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:38.757 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:38.757 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:38.757 [15/268] Linking static target lib/librte_telemetry.a 00:03:38.757 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:38.757 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:38.757 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:39.323 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:39.323 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:39.323 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:39.323 [22/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.323 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:39.324 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:39.324 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:39.324 [26/268] Linking target lib/librte_log.so.24.1 00:03:39.324 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:39.582 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:39.582 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:39.582 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:39.582 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.840 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:39.840 [33/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:39.840 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:39.840 [35/268] Linking target lib/librte_kvargs.so.24.1 00:03:39.840 [36/268] Linking target lib/librte_telemetry.so.24.1 00:03:40.098 [37/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:40.098 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:40.098 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:40.098 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:40.098 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:40.098 [42/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:40.098 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:40.098 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:40.356 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:40.356 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:40.356 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:40.356 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:40.614 [49/268] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:40.614 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:40.614 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:40.871 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:40.871 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:40.871 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:40.871 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:41.128 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:41.128 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:41.128 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:41.128 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:41.128 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:41.128 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:41.385 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:41.385 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:41.385 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:41.385 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:41.385 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:41.643 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:41.643 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:41.902 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:41.902 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:41.902 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:41.902 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:42.160 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:42.160 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:42.160 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:42.160 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:42.161 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:42.161 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:42.161 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:42.419 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:42.419 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:42.419 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:42.419 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:42.419 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:42.686 [85/268] Linking static target lib/librte_eal.a 00:03:42.686 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:42.686 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:42.686 [88/268] Linking static target lib/librte_ring.a 00:03:42.686 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:42.945 [90/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:42.945 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:42.945 [92/268] Linking static target lib/librte_mempool.a 00:03:42.945 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:42.945 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:42.945 [95/268] Linking static target lib/librte_rcu.a 00:03:42.945 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:43.203 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:43.203 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:43.203 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:43.203 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.462 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:43.462 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:43.720 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:43.720 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:43.720 [105/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.720 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:43.720 [107/268] Linking static target lib/librte_net.a 00:03:43.978 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:43.978 [109/268] Linking static target lib/librte_meter.a 00:03:43.978 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:43.978 [111/268] Linking static target lib/librte_mbuf.a 00:03:44.236 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:44.236 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:44.236 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:44.237 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.237 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.237 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.237 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:44.804 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:44.805 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:44.805 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:45.063 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:45.322 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.322 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:45.322 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:45.322 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:45.322 [127/268] Linking static target lib/librte_pci.a 00:03:45.322 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:45.322 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:45.581 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:45.581 [131/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:45.581 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:45.581 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:45.581 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:45.581 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:45.581 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:45.581 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:45.841 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.841 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:45.841 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:45.841 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:45.841 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:45.841 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:45.841 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:45.841 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:45.841 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:45.841 [147/268] Linking static target lib/librte_cmdline.a 00:03:46.101 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:46.361 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:46.361 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:46.621 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:46.621 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:46.621 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:46.621 [154/268] Linking static target lib/librte_timer.a 00:03:46.621 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:46.621 [156/268] Linking static target lib/librte_ethdev.a 00:03:46.621 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:46.881 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:46.881 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:46.881 [160/268] Linking static target lib/librte_compressdev.a 00:03:46.881 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:47.140 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:47.140 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:47.140 [164/268] Linking static target lib/librte_hash.a 00:03:47.399 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:47.399 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:47.399 [167/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.399 [168/268] Linking static target lib/librte_dmadev.a 00:03:47.399 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:47.658 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:47.658 [171/268] Generating 
lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.658 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:47.918 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.918 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:47.918 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:48.176 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:48.176 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:48.176 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:48.176 [179/268] Linking static target lib/librte_cryptodev.a 00:03:48.176 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:48.176 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:48.436 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.436 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:48.436 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.436 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:48.436 [186/268] Linking static target lib/librte_power.a 00:03:48.695 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:48.695 [188/268] Linking static target lib/librte_reorder.a 00:03:48.954 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:48.954 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:48.954 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:48.954 [192/268] Linking static target lib/librte_security.a 00:03:49.213 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:49.214 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:49.472 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.731 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.731 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:49.731 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.990 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:49.990 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:49.990 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:50.249 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:50.249 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:50.508 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:50.508 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:50.508 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:50.767 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:50.767 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:50.767 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:50.767 [210/268] Linking 
static target drivers/libtmp_rte_bus_pci.a 00:03:51.034 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.034 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:51.034 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:51.035 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:51.035 [215/268] Linking static target drivers/librte_bus_vdev.a 00:03:51.035 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:51.035 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:51.035 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:51.035 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:51.296 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:51.296 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:51.296 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.555 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:51.555 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:51.555 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:51.555 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:51.814 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:52.382 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:55.668 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.668 [230/268] Linking target lib/librte_eal.so.24.1 00:03:55.927 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:55.927 [232/268] Linking target lib/librte_ring.so.24.1 00:03:55.927 [233/268] Linking target lib/librte_pci.so.24.1 00:03:55.927 [234/268] Linking target lib/librte_meter.so.24.1 00:03:55.927 [235/268] Linking target lib/librte_timer.so.24.1 00:03:55.927 [236/268] Linking target lib/librte_dmadev.so.24.1 00:03:55.927 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:55.927 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:55.927 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:55.927 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:55.927 [241/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:55.927 [242/268] Linking target lib/librte_mempool.so.24.1 00:03:55.927 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:55.927 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:55.927 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:56.271 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:56.271 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:56.271 [248/268] Linking target lib/librte_mbuf.so.24.1 00:03:56.271 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:56.271 [250/268] 
Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.271 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:56.271 [252/268] Linking target lib/librte_reorder.so.24.1 00:03:56.271 [253/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:56.271 [254/268] Linking target lib/librte_net.so.24.1 00:03:56.271 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:56.530 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:03:56.530 [257/268] Linking static target lib/librte_vhost.a 00:03:56.530 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:56.530 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:56.530 [260/268] Linking target lib/librte_hash.so.24.1 00:03:56.530 [261/268] Linking target lib/librte_cmdline.so.24.1 00:03:56.530 [262/268] Linking target lib/librte_security.so.24.1 00:03:56.530 [263/268] Linking target lib/librte_ethdev.so.24.1 00:03:56.789 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:56.789 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:56.789 [266/268] Linking target lib/librte_power.so.24.1 00:03:59.325 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.325 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:59.325 INFO: autodetecting backend as ninja 00:03:59.325 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:17.502 CC lib/ut_mock/mock.o 00:04:17.502 CC lib/ut/ut.o 00:04:17.502 CC lib/log/log_deprecated.o 00:04:17.502 CC lib/log/log.o 00:04:17.502 CC lib/log/log_flags.o 00:04:17.502 LIB libspdk_ut_mock.a 00:04:17.502 SO libspdk_ut_mock.so.6.0 00:04:17.502 LIB libspdk_log.a 00:04:17.502 LIB libspdk_ut.a 00:04:17.502 SO libspdk_ut.so.2.0 00:04:17.502 SO libspdk_log.so.7.1 00:04:17.502 SYMLINK libspdk_ut_mock.so 00:04:17.502 SYMLINK libspdk_ut.so 00:04:17.502 SYMLINK libspdk_log.so 00:04:17.502 CC lib/dma/dma.o 00:04:17.502 CC lib/ioat/ioat.o 00:04:17.502 CC lib/util/base64.o 00:04:17.502 CC lib/util/crc16.o 00:04:17.502 CC lib/util/bit_array.o 00:04:17.502 CC lib/util/crc32c.o 00:04:17.502 CC lib/util/crc32.o 00:04:17.502 CC lib/util/cpuset.o 00:04:17.502 CXX lib/trace_parser/trace.o 00:04:17.502 CC lib/vfio_user/host/vfio_user_pci.o 00:04:17.502 CC lib/vfio_user/host/vfio_user.o 00:04:17.502 LIB libspdk_dma.a 00:04:17.502 CC lib/util/crc32_ieee.o 00:04:17.502 SO libspdk_dma.so.5.0 00:04:17.502 CC lib/util/crc64.o 00:04:17.502 CC lib/util/dif.o 00:04:17.502 CC lib/util/fd.o 00:04:17.502 SYMLINK libspdk_dma.so 00:04:17.502 CC lib/util/fd_group.o 00:04:17.502 CC lib/util/file.o 00:04:17.502 CC lib/util/hexlify.o 00:04:17.502 CC lib/util/iov.o 00:04:17.502 CC lib/util/math.o 00:04:17.502 LIB libspdk_vfio_user.a 00:04:17.502 CC lib/util/net.o 00:04:17.502 SO libspdk_vfio_user.so.5.0 00:04:17.502 LIB libspdk_ioat.a 00:04:17.502 SO libspdk_ioat.so.7.0 00:04:17.502 SYMLINK libspdk_vfio_user.so 00:04:17.502 CC lib/util/pipe.o 00:04:17.502 CC lib/util/strerror_tls.o 00:04:17.502 SYMLINK libspdk_ioat.so 00:04:17.502 CC lib/util/string.o 00:04:17.502 CC lib/util/uuid.o 00:04:17.502 CC lib/util/xor.o 00:04:17.502 CC lib/util/zipf.o 00:04:17.502 CC lib/util/md5.o 00:04:18.070 LIB libspdk_util.a 00:04:18.070 LIB libspdk_trace_parser.a 00:04:18.070 SO 
libspdk_util.so.10.1 00:04:18.070 SO libspdk_trace_parser.so.6.0 00:04:18.329 SYMLINK libspdk_util.so 00:04:18.329 SYMLINK libspdk_trace_parser.so 00:04:18.588 CC lib/conf/conf.o 00:04:18.588 CC lib/rdma_utils/rdma_utils.o 00:04:18.588 CC lib/json/json_parse.o 00:04:18.588 CC lib/json/json_util.o 00:04:18.588 CC lib/json/json_write.o 00:04:18.588 CC lib/idxd/idxd.o 00:04:18.588 CC lib/env_dpdk/env.o 00:04:18.588 CC lib/idxd/idxd_user.o 00:04:18.588 CC lib/env_dpdk/memory.o 00:04:18.588 CC lib/vmd/vmd.o 00:04:18.846 LIB libspdk_conf.a 00:04:18.846 SO libspdk_conf.so.6.0 00:04:18.846 CC lib/env_dpdk/pci.o 00:04:18.846 CC lib/idxd/idxd_kernel.o 00:04:18.846 CC lib/vmd/led.o 00:04:18.846 LIB libspdk_rdma_utils.a 00:04:18.846 SO libspdk_rdma_utils.so.1.0 00:04:18.846 LIB libspdk_json.a 00:04:18.846 SYMLINK libspdk_conf.so 00:04:18.846 CC lib/env_dpdk/init.o 00:04:18.846 SO libspdk_json.so.6.0 00:04:18.846 SYMLINK libspdk_rdma_utils.so 00:04:18.846 CC lib/env_dpdk/threads.o 00:04:18.846 SYMLINK libspdk_json.so 00:04:19.105 CC lib/env_dpdk/pci_ioat.o 00:04:19.105 CC lib/env_dpdk/pci_virtio.o 00:04:19.105 CC lib/env_dpdk/pci_vmd.o 00:04:19.105 CC lib/jsonrpc/jsonrpc_server.o 00:04:19.105 CC lib/rdma_provider/common.o 00:04:19.105 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:19.105 CC lib/env_dpdk/pci_idxd.o 00:04:19.364 LIB libspdk_idxd.a 00:04:19.364 CC lib/env_dpdk/pci_event.o 00:04:19.364 CC lib/env_dpdk/sigbus_handler.o 00:04:19.364 CC lib/env_dpdk/pci_dpdk.o 00:04:19.364 SO libspdk_idxd.so.12.1 00:04:19.364 LIB libspdk_vmd.a 00:04:19.364 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:19.364 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:19.364 SYMLINK libspdk_idxd.so 00:04:19.364 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:19.364 SO libspdk_vmd.so.6.0 00:04:19.364 CC lib/jsonrpc/jsonrpc_client.o 00:04:19.364 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:19.364 LIB libspdk_rdma_provider.a 00:04:19.364 SYMLINK libspdk_vmd.so 00:04:19.364 SO libspdk_rdma_provider.so.7.0 00:04:19.623 SYMLINK libspdk_rdma_provider.so 00:04:19.623 LIB libspdk_jsonrpc.a 00:04:19.623 SO libspdk_jsonrpc.so.6.0 00:04:19.882 SYMLINK libspdk_jsonrpc.so 00:04:20.141 LIB libspdk_env_dpdk.a 00:04:20.141 CC lib/rpc/rpc.o 00:04:20.400 SO libspdk_env_dpdk.so.15.1 00:04:20.400 SYMLINK libspdk_env_dpdk.so 00:04:20.400 LIB libspdk_rpc.a 00:04:20.400 SO libspdk_rpc.so.6.0 00:04:20.659 SYMLINK libspdk_rpc.so 00:04:20.918 CC lib/trace/trace.o 00:04:20.918 CC lib/trace/trace_flags.o 00:04:20.918 CC lib/keyring/keyring.o 00:04:20.918 CC lib/trace/trace_rpc.o 00:04:20.918 CC lib/keyring/keyring_rpc.o 00:04:20.918 CC lib/notify/notify.o 00:04:20.918 CC lib/notify/notify_rpc.o 00:04:21.176 LIB libspdk_notify.a 00:04:21.176 SO libspdk_notify.so.6.0 00:04:21.176 LIB libspdk_keyring.a 00:04:21.176 LIB libspdk_trace.a 00:04:21.176 SO libspdk_keyring.so.2.0 00:04:21.176 SYMLINK libspdk_notify.so 00:04:21.176 SO libspdk_trace.so.11.0 00:04:21.435 SYMLINK libspdk_keyring.so 00:04:21.435 SYMLINK libspdk_trace.so 00:04:21.693 CC lib/sock/sock_rpc.o 00:04:21.693 CC lib/sock/sock.o 00:04:21.693 CC lib/thread/thread.o 00:04:21.693 CC lib/thread/iobuf.o 00:04:22.260 LIB libspdk_sock.a 00:04:22.260 SO libspdk_sock.so.10.0 00:04:22.260 SYMLINK libspdk_sock.so 00:04:22.826 CC lib/nvme/nvme_ctrlr.o 00:04:22.826 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:22.826 CC lib/nvme/nvme_fabric.o 00:04:22.826 CC lib/nvme/nvme_pcie_common.o 00:04:22.826 CC lib/nvme/nvme_ns_cmd.o 00:04:22.826 CC lib/nvme/nvme.o 00:04:22.826 CC lib/nvme/nvme_ns.o 00:04:22.826 CC lib/nvme/nvme_qpair.o 
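The ninja run above completes the bundled DPDK submodule, and the surrounding CC lib/... lines are SPDK's own make-based build picking it up. For reference, the configuration reported in the "User defined options" summary earlier in this log can be reproduced by hand with a meson invocation along the lines below. The option values are copied directly from that summary; the invocation itself is an assumption, since the log only records meson's output, not the command SPDK's build scripts used to launch it.

  # Sketch only: option values taken from the meson summary above; the
  # command form is illustrative, not the one the CI job actually ran.
  meson setup build-tmp \
    --buildtype=debug --default-library=shared --libdir=lib \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Dmax_lcores=128 -Dtests=false -Denable_docs=false -Denable_kmods=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
    -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
  # Matches the backend command the log reports autodetecting:
  ninja -C build-tmp -j 10
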
00:04:22.826 CC lib/nvme/nvme_pcie.o 00:04:23.394 LIB libspdk_thread.a 00:04:23.394 CC lib/nvme/nvme_quirks.o 00:04:23.394 CC lib/nvme/nvme_transport.o 00:04:23.394 SO libspdk_thread.so.11.0 00:04:23.653 SYMLINK libspdk_thread.so 00:04:23.653 CC lib/nvme/nvme_discovery.o 00:04:23.653 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:23.653 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:23.653 CC lib/nvme/nvme_tcp.o 00:04:23.653 CC lib/accel/accel.o 00:04:23.653 CC lib/nvme/nvme_opal.o 00:04:23.912 CC lib/accel/accel_rpc.o 00:04:24.171 CC lib/accel/accel_sw.o 00:04:24.171 CC lib/nvme/nvme_io_msg.o 00:04:24.171 CC lib/nvme/nvme_poll_group.o 00:04:24.171 CC lib/nvme/nvme_zns.o 00:04:24.171 CC lib/nvme/nvme_stubs.o 00:04:24.431 CC lib/blob/blobstore.o 00:04:24.690 CC lib/init/json_config.o 00:04:24.690 CC lib/init/subsystem.o 00:04:24.690 CC lib/virtio/virtio.o 00:04:24.690 CC lib/blob/request.o 00:04:24.690 CC lib/blob/zeroes.o 00:04:24.950 CC lib/init/subsystem_rpc.o 00:04:24.950 CC lib/init/rpc.o 00:04:24.950 CC lib/nvme/nvme_auth.o 00:04:24.950 CC lib/nvme/nvme_cuse.o 00:04:24.950 CC lib/nvme/nvme_rdma.o 00:04:24.950 LIB libspdk_init.a 00:04:25.209 LIB libspdk_accel.a 00:04:25.209 SO libspdk_init.so.6.0 00:04:25.209 CC lib/virtio/virtio_vhost_user.o 00:04:25.209 SO libspdk_accel.so.16.0 00:04:25.209 CC lib/blob/blob_bs_dev.o 00:04:25.209 SYMLINK libspdk_init.so 00:04:25.209 CC lib/virtio/virtio_vfio_user.o 00:04:25.209 SYMLINK libspdk_accel.so 00:04:25.209 CC lib/virtio/virtio_pci.o 00:04:25.468 CC lib/fsdev/fsdev.o 00:04:25.468 CC lib/fsdev/fsdev_io.o 00:04:25.468 CC lib/fsdev/fsdev_rpc.o 00:04:25.468 LIB libspdk_virtio.a 00:04:25.727 CC lib/bdev/bdev.o 00:04:25.727 SO libspdk_virtio.so.7.0 00:04:25.727 CC lib/event/app.o 00:04:25.727 CC lib/event/reactor.o 00:04:25.727 SYMLINK libspdk_virtio.so 00:04:25.727 CC lib/bdev/bdev_rpc.o 00:04:25.986 CC lib/event/log_rpc.o 00:04:25.986 CC lib/event/app_rpc.o 00:04:25.986 CC lib/bdev/bdev_zone.o 00:04:25.986 CC lib/event/scheduler_static.o 00:04:25.986 CC lib/bdev/part.o 00:04:26.245 CC lib/bdev/scsi_nvme.o 00:04:26.245 LIB libspdk_fsdev.a 00:04:26.245 SO libspdk_fsdev.so.2.0 00:04:26.245 LIB libspdk_event.a 00:04:26.505 SO libspdk_event.so.14.0 00:04:26.505 SYMLINK libspdk_fsdev.so 00:04:26.505 SYMLINK libspdk_event.so 00:04:26.505 LIB libspdk_nvme.a 00:04:26.768 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:26.768 SO libspdk_nvme.so.15.0 00:04:27.336 SYMLINK libspdk_nvme.so 00:04:27.595 LIB libspdk_fuse_dispatcher.a 00:04:27.595 SO libspdk_fuse_dispatcher.so.1.0 00:04:27.595 SYMLINK libspdk_fuse_dispatcher.so 00:04:28.533 LIB libspdk_blob.a 00:04:28.533 SO libspdk_blob.so.11.0 00:04:28.533 SYMLINK libspdk_blob.so 00:04:29.102 CC lib/lvol/lvol.o 00:04:29.102 LIB libspdk_bdev.a 00:04:29.102 CC lib/blobfs/blobfs.o 00:04:29.102 CC lib/blobfs/tree.o 00:04:29.102 SO libspdk_bdev.so.17.0 00:04:29.102 SYMLINK libspdk_bdev.so 00:04:29.360 CC lib/ublk/ublk_rpc.o 00:04:29.360 CC lib/ublk/ublk.o 00:04:29.360 CC lib/ftl/ftl_core.o 00:04:29.360 CC lib/ftl/ftl_layout.o 00:04:29.361 CC lib/nvmf/ctrlr.o 00:04:29.361 CC lib/ftl/ftl_init.o 00:04:29.361 CC lib/nbd/nbd.o 00:04:29.361 CC lib/scsi/dev.o 00:04:29.620 CC lib/scsi/lun.o 00:04:29.620 CC lib/nvmf/ctrlr_discovery.o 00:04:29.879 CC lib/nvmf/ctrlr_bdev.o 00:04:29.879 CC lib/scsi/port.o 00:04:29.879 CC lib/ftl/ftl_debug.o 00:04:29.879 CC lib/nbd/nbd_rpc.o 00:04:29.879 LIB libspdk_blobfs.a 00:04:30.137 CC lib/nvmf/subsystem.o 00:04:30.137 SO libspdk_blobfs.so.10.0 00:04:30.137 CC lib/scsi/scsi.o 00:04:30.137 SYMLINK 
libspdk_blobfs.so 00:04:30.137 CC lib/scsi/scsi_bdev.o 00:04:30.137 LIB libspdk_nbd.a 00:04:30.137 LIB libspdk_lvol.a 00:04:30.137 CC lib/ftl/ftl_io.o 00:04:30.137 SO libspdk_nbd.so.7.0 00:04:30.137 SO libspdk_lvol.so.10.0 00:04:30.137 LIB libspdk_ublk.a 00:04:30.137 CC lib/nvmf/nvmf.o 00:04:30.396 SYMLINK libspdk_lvol.so 00:04:30.396 SYMLINK libspdk_nbd.so 00:04:30.396 CC lib/nvmf/nvmf_rpc.o 00:04:30.396 SO libspdk_ublk.so.3.0 00:04:30.396 CC lib/nvmf/transport.o 00:04:30.396 CC lib/ftl/ftl_sb.o 00:04:30.396 SYMLINK libspdk_ublk.so 00:04:30.396 CC lib/ftl/ftl_l2p.o 00:04:30.396 CC lib/nvmf/tcp.o 00:04:30.655 CC lib/nvmf/stubs.o 00:04:30.655 CC lib/ftl/ftl_l2p_flat.o 00:04:30.655 CC lib/ftl/ftl_nv_cache.o 00:04:30.931 CC lib/scsi/scsi_pr.o 00:04:30.931 CC lib/ftl/ftl_band.o 00:04:31.189 CC lib/ftl/ftl_band_ops.o 00:04:31.189 CC lib/ftl/ftl_writer.o 00:04:31.189 CC lib/scsi/scsi_rpc.o 00:04:31.447 CC lib/ftl/ftl_rq.o 00:04:31.447 CC lib/nvmf/mdns_server.o 00:04:31.447 CC lib/nvmf/rdma.o 00:04:31.447 CC lib/nvmf/auth.o 00:04:31.447 CC lib/scsi/task.o 00:04:31.447 CC lib/ftl/ftl_reloc.o 00:04:31.447 CC lib/ftl/ftl_l2p_cache.o 00:04:31.706 CC lib/ftl/ftl_p2l.o 00:04:31.706 CC lib/ftl/ftl_p2l_log.o 00:04:31.706 LIB libspdk_scsi.a 00:04:31.706 SO libspdk_scsi.so.9.0 00:04:31.966 CC lib/ftl/mngt/ftl_mngt.o 00:04:31.966 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:31.966 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:31.966 SYMLINK libspdk_scsi.so 00:04:31.966 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:32.225 CC lib/iscsi/conn.o 00:04:32.225 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:32.225 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:32.225 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:32.225 CC lib/vhost/vhost.o 00:04:32.225 CC lib/vhost/vhost_rpc.o 00:04:32.225 CC lib/vhost/vhost_scsi.o 00:04:32.484 CC lib/vhost/vhost_blk.o 00:04:32.484 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:32.484 CC lib/iscsi/init_grp.o 00:04:32.484 CC lib/iscsi/iscsi.o 00:04:32.484 CC lib/iscsi/param.o 00:04:32.744 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:32.744 CC lib/iscsi/portal_grp.o 00:04:33.003 CC lib/iscsi/tgt_node.o 00:04:33.003 CC lib/iscsi/iscsi_subsystem.o 00:04:33.003 CC lib/iscsi/iscsi_rpc.o 00:04:33.003 CC lib/iscsi/task.o 00:04:33.003 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:33.003 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:33.262 CC lib/vhost/rte_vhost_user.o 00:04:33.262 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:33.262 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:33.262 CC lib/ftl/utils/ftl_conf.o 00:04:33.262 CC lib/ftl/utils/ftl_md.o 00:04:33.262 CC lib/ftl/utils/ftl_mempool.o 00:04:33.522 CC lib/ftl/utils/ftl_bitmap.o 00:04:33.522 CC lib/ftl/utils/ftl_property.o 00:04:33.522 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:33.522 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:33.522 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:33.522 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:33.781 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:33.781 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:33.781 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:33.781 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:33.781 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:33.781 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:33.781 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:33.781 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:33.781 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:34.040 CC lib/ftl/base/ftl_base_dev.o 00:04:34.041 CC lib/ftl/base/ftl_base_bdev.o 00:04:34.041 CC lib/ftl/ftl_trace.o 00:04:34.041 LIB libspdk_nvmf.a 00:04:34.041 LIB libspdk_iscsi.a 00:04:34.300 SO libspdk_nvmf.so.20.0 00:04:34.300 LIB libspdk_vhost.a 
00:04:34.300 SO libspdk_iscsi.so.8.0 00:04:34.300 SO libspdk_vhost.so.8.0 00:04:34.300 LIB libspdk_ftl.a 00:04:34.559 SYMLINK libspdk_iscsi.so 00:04:34.559 SYMLINK libspdk_vhost.so 00:04:34.559 SYMLINK libspdk_nvmf.so 00:04:34.818 SO libspdk_ftl.so.9.0 00:04:35.078 SYMLINK libspdk_ftl.so 00:04:35.336 CC module/env_dpdk/env_dpdk_rpc.o 00:04:35.595 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:35.595 CC module/fsdev/aio/fsdev_aio.o 00:04:35.595 CC module/blob/bdev/blob_bdev.o 00:04:35.595 CC module/keyring/file/keyring.o 00:04:35.595 CC module/sock/posix/posix.o 00:04:35.595 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:35.596 CC module/keyring/linux/keyring.o 00:04:35.596 CC module/scheduler/gscheduler/gscheduler.o 00:04:35.596 CC module/accel/error/accel_error.o 00:04:35.596 LIB libspdk_env_dpdk_rpc.a 00:04:35.596 SO libspdk_env_dpdk_rpc.so.6.0 00:04:35.596 SYMLINK libspdk_env_dpdk_rpc.so 00:04:35.596 CC module/keyring/file/keyring_rpc.o 00:04:35.596 CC module/keyring/linux/keyring_rpc.o 00:04:35.596 CC module/accel/error/accel_error_rpc.o 00:04:35.596 LIB libspdk_scheduler_gscheduler.a 00:04:35.596 LIB libspdk_scheduler_dpdk_governor.a 00:04:35.596 SO libspdk_scheduler_gscheduler.so.4.0 00:04:35.596 LIB libspdk_scheduler_dynamic.a 00:04:35.855 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:35.855 SO libspdk_scheduler_dynamic.so.4.0 00:04:35.855 SYMLINK libspdk_scheduler_gscheduler.so 00:04:35.855 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:35.855 LIB libspdk_keyring_linux.a 00:04:35.855 SYMLINK libspdk_scheduler_dynamic.so 00:04:35.855 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:35.855 LIB libspdk_keyring_file.a 00:04:35.855 LIB libspdk_accel_error.a 00:04:35.855 SO libspdk_keyring_linux.so.1.0 00:04:35.855 LIB libspdk_blob_bdev.a 00:04:35.855 SO libspdk_keyring_file.so.2.0 00:04:35.855 SO libspdk_accel_error.so.2.0 00:04:35.855 SO libspdk_blob_bdev.so.11.0 00:04:35.855 SYMLINK libspdk_keyring_linux.so 00:04:35.855 SYMLINK libspdk_accel_error.so 00:04:35.855 CC module/fsdev/aio/linux_aio_mgr.o 00:04:35.855 SYMLINK libspdk_keyring_file.so 00:04:35.855 SYMLINK libspdk_blob_bdev.so 00:04:35.855 CC module/accel/dsa/accel_dsa_rpc.o 00:04:35.855 CC module/accel/dsa/accel_dsa.o 00:04:35.855 CC module/accel/ioat/accel_ioat.o 00:04:36.114 CC module/accel/ioat/accel_ioat_rpc.o 00:04:36.114 CC module/accel/iaa/accel_iaa.o 00:04:36.114 CC module/accel/iaa/accel_iaa_rpc.o 00:04:36.114 LIB libspdk_accel_ioat.a 00:04:36.114 SO libspdk_accel_ioat.so.6.0 00:04:36.373 CC module/bdev/delay/vbdev_delay.o 00:04:36.373 CC module/blobfs/bdev/blobfs_bdev.o 00:04:36.373 LIB libspdk_accel_iaa.a 00:04:36.373 LIB libspdk_fsdev_aio.a 00:04:36.373 CC module/bdev/error/vbdev_error.o 00:04:36.373 SO libspdk_accel_iaa.so.3.0 00:04:36.373 LIB libspdk_accel_dsa.a 00:04:36.373 CC module/bdev/gpt/gpt.o 00:04:36.373 SYMLINK libspdk_accel_ioat.so 00:04:36.373 CC module/bdev/gpt/vbdev_gpt.o 00:04:36.373 SO libspdk_fsdev_aio.so.1.0 00:04:36.373 SO libspdk_accel_dsa.so.5.0 00:04:36.373 LIB libspdk_sock_posix.a 00:04:36.373 SYMLINK libspdk_accel_iaa.so 00:04:36.373 CC module/bdev/lvol/vbdev_lvol.o 00:04:36.373 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:36.373 SO libspdk_sock_posix.so.6.0 00:04:36.373 SYMLINK libspdk_fsdev_aio.so 00:04:36.373 SYMLINK libspdk_accel_dsa.so 00:04:36.373 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:36.373 SYMLINK libspdk_sock_posix.so 00:04:36.632 CC module/bdev/error/vbdev_error_rpc.o 00:04:36.632 LIB libspdk_blobfs_bdev.a 00:04:36.632 CC module/bdev/malloc/bdev_malloc.o 
00:04:36.632 LIB libspdk_bdev_gpt.a 00:04:36.632 SO libspdk_blobfs_bdev.so.6.0 00:04:36.632 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:36.632 LIB libspdk_bdev_delay.a 00:04:36.632 CC module/bdev/null/bdev_null.o 00:04:36.632 CC module/bdev/nvme/bdev_nvme.o 00:04:36.632 SO libspdk_bdev_gpt.so.6.0 00:04:36.632 SO libspdk_bdev_delay.so.6.0 00:04:36.632 CC module/bdev/passthru/vbdev_passthru.o 00:04:36.632 SYMLINK libspdk_blobfs_bdev.so 00:04:36.632 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:36.632 SYMLINK libspdk_bdev_gpt.so 00:04:36.632 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:36.632 LIB libspdk_bdev_error.a 00:04:36.632 SYMLINK libspdk_bdev_delay.so 00:04:36.632 CC module/bdev/nvme/nvme_rpc.o 00:04:36.891 SO libspdk_bdev_error.so.6.0 00:04:36.891 SYMLINK libspdk_bdev_error.so 00:04:36.891 CC module/bdev/nvme/bdev_mdns_client.o 00:04:36.891 CC module/bdev/nvme/vbdev_opal.o 00:04:36.891 CC module/bdev/null/bdev_null_rpc.o 00:04:36.891 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:36.891 LIB libspdk_bdev_passthru.a 00:04:36.891 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:36.891 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:36.891 LIB libspdk_bdev_lvol.a 00:04:37.150 SO libspdk_bdev_passthru.so.6.0 00:04:37.150 SO libspdk_bdev_lvol.so.6.0 00:04:37.150 SYMLINK libspdk_bdev_passthru.so 00:04:37.150 LIB libspdk_bdev_null.a 00:04:37.150 SYMLINK libspdk_bdev_lvol.so 00:04:37.150 CC module/bdev/raid/bdev_raid.o 00:04:37.150 LIB libspdk_bdev_malloc.a 00:04:37.150 SO libspdk_bdev_null.so.6.0 00:04:37.150 SO libspdk_bdev_malloc.so.6.0 00:04:37.150 SYMLINK libspdk_bdev_null.so 00:04:37.150 CC module/bdev/raid/bdev_raid_rpc.o 00:04:37.410 CC module/bdev/split/vbdev_split.o 00:04:37.410 SYMLINK libspdk_bdev_malloc.so 00:04:37.410 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:37.410 CC module/bdev/aio/bdev_aio.o 00:04:37.410 CC module/bdev/xnvme/bdev_xnvme.o 00:04:37.410 CC module/bdev/ftl/bdev_ftl.o 00:04:37.410 CC module/bdev/aio/bdev_aio_rpc.o 00:04:37.410 CC module/bdev/iscsi/bdev_iscsi.o 00:04:37.410 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:37.410 CC module/bdev/split/vbdev_split_rpc.o 00:04:37.670 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:37.670 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:37.670 CC module/bdev/raid/bdev_raid_sb.o 00:04:37.670 LIB libspdk_bdev_aio.a 00:04:37.670 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:37.670 LIB libspdk_bdev_split.a 00:04:37.670 SO libspdk_bdev_aio.so.6.0 00:04:37.670 LIB libspdk_bdev_zone_block.a 00:04:37.670 SO libspdk_bdev_split.so.6.0 00:04:37.670 SO libspdk_bdev_zone_block.so.6.0 00:04:37.670 SYMLINK libspdk_bdev_aio.so 00:04:37.930 LIB libspdk_bdev_xnvme.a 00:04:37.930 CC module/bdev/raid/raid0.o 00:04:37.930 SYMLINK libspdk_bdev_split.so 00:04:37.930 SO libspdk_bdev_xnvme.so.3.0 00:04:37.930 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:37.930 LIB libspdk_bdev_iscsi.a 00:04:37.930 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:37.930 SYMLINK libspdk_bdev_zone_block.so 00:04:37.930 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:37.930 SO libspdk_bdev_iscsi.so.6.0 00:04:37.930 SYMLINK libspdk_bdev_xnvme.so 00:04:37.930 CC module/bdev/raid/raid1.o 00:04:37.930 LIB libspdk_bdev_ftl.a 00:04:37.930 CC module/bdev/raid/concat.o 00:04:37.930 SYMLINK libspdk_bdev_iscsi.so 00:04:37.930 SO libspdk_bdev_ftl.so.6.0 00:04:38.189 SYMLINK libspdk_bdev_ftl.so 00:04:38.189 LIB libspdk_bdev_raid.a 00:04:38.448 SO libspdk_bdev_raid.so.6.0 00:04:38.448 LIB libspdk_bdev_virtio.a 00:04:38.448 SYMLINK libspdk_bdev_raid.so 00:04:38.448 SO 
libspdk_bdev_virtio.so.6.0 00:04:38.707 SYMLINK libspdk_bdev_virtio.so 00:04:39.664 LIB libspdk_bdev_nvme.a 00:04:39.923 SO libspdk_bdev_nvme.so.7.1 00:04:39.923 SYMLINK libspdk_bdev_nvme.so 00:04:40.489 CC module/event/subsystems/iobuf/iobuf.o 00:04:40.489 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:40.489 CC module/event/subsystems/fsdev/fsdev.o 00:04:40.489 CC module/event/subsystems/vmd/vmd.o 00:04:40.489 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:40.489 CC module/event/subsystems/sock/sock.o 00:04:40.747 CC module/event/subsystems/scheduler/scheduler.o 00:04:40.747 CC module/event/subsystems/keyring/keyring.o 00:04:40.747 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:40.747 LIB libspdk_event_keyring.a 00:04:40.747 LIB libspdk_event_sock.a 00:04:40.747 LIB libspdk_event_scheduler.a 00:04:40.747 LIB libspdk_event_vhost_blk.a 00:04:40.747 LIB libspdk_event_vmd.a 00:04:40.747 LIB libspdk_event_iobuf.a 00:04:40.747 LIB libspdk_event_fsdev.a 00:04:40.747 SO libspdk_event_keyring.so.1.0 00:04:40.747 SO libspdk_event_sock.so.5.0 00:04:40.747 SO libspdk_event_vhost_blk.so.3.0 00:04:40.747 SO libspdk_event_scheduler.so.4.0 00:04:40.747 SO libspdk_event_fsdev.so.1.0 00:04:40.747 SO libspdk_event_iobuf.so.3.0 00:04:40.747 SO libspdk_event_vmd.so.6.0 00:04:40.747 SYMLINK libspdk_event_keyring.so 00:04:40.747 SYMLINK libspdk_event_sock.so 00:04:40.747 SYMLINK libspdk_event_scheduler.so 00:04:40.747 SYMLINK libspdk_event_vhost_blk.so 00:04:40.747 SYMLINK libspdk_event_fsdev.so 00:04:40.747 SYMLINK libspdk_event_vmd.so 00:04:40.747 SYMLINK libspdk_event_iobuf.so 00:04:41.316 CC module/event/subsystems/accel/accel.o 00:04:41.316 LIB libspdk_event_accel.a 00:04:41.579 SO libspdk_event_accel.so.6.0 00:04:41.579 SYMLINK libspdk_event_accel.so 00:04:42.147 CC module/event/subsystems/bdev/bdev.o 00:04:42.147 LIB libspdk_event_bdev.a 00:04:42.147 SO libspdk_event_bdev.so.6.0 00:04:42.406 SYMLINK libspdk_event_bdev.so 00:04:42.665 CC module/event/subsystems/scsi/scsi.o 00:04:42.665 CC module/event/subsystems/nbd/nbd.o 00:04:42.665 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:42.665 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:42.665 CC module/event/subsystems/ublk/ublk.o 00:04:42.924 LIB libspdk_event_nbd.a 00:04:42.924 LIB libspdk_event_ublk.a 00:04:42.924 LIB libspdk_event_scsi.a 00:04:42.924 SO libspdk_event_nbd.so.6.0 00:04:42.924 SO libspdk_event_ublk.so.3.0 00:04:42.924 SO libspdk_event_scsi.so.6.0 00:04:42.924 LIB libspdk_event_nvmf.a 00:04:42.924 SYMLINK libspdk_event_nbd.so 00:04:42.924 SYMLINK libspdk_event_ublk.so 00:04:42.924 SYMLINK libspdk_event_scsi.so 00:04:42.924 SO libspdk_event_nvmf.so.6.0 00:04:42.924 SYMLINK libspdk_event_nvmf.so 00:04:43.183 CC module/event/subsystems/iscsi/iscsi.o 00:04:43.183 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:43.441 LIB libspdk_event_vhost_scsi.a 00:04:43.441 LIB libspdk_event_iscsi.a 00:04:43.441 SO libspdk_event_vhost_scsi.so.3.0 00:04:43.441 SO libspdk_event_iscsi.so.6.0 00:04:43.441 SYMLINK libspdk_event_vhost_scsi.so 00:04:43.700 SYMLINK libspdk_event_iscsi.so 00:04:43.700 SO libspdk.so.6.0 00:04:43.700 SYMLINK libspdk.so 00:04:44.266 CC app/trace_record/trace_record.o 00:04:44.266 CC app/spdk_lspci/spdk_lspci.o 00:04:44.266 CC app/spdk_nvme_identify/identify.o 00:04:44.266 CXX app/trace/trace.o 00:04:44.266 CC app/spdk_nvme_perf/perf.o 00:04:44.266 CC app/nvmf_tgt/nvmf_main.o 00:04:44.266 CC app/iscsi_tgt/iscsi_tgt.o 00:04:44.266 CC app/spdk_tgt/spdk_tgt.o 00:04:44.266 CC examples/util/zipf/zipf.o 00:04:44.266 
CC test/thread/poller_perf/poller_perf.o 00:04:44.266 LINK spdk_lspci 00:04:44.266 LINK nvmf_tgt 00:04:44.266 LINK iscsi_tgt 00:04:44.266 LINK zipf 00:04:44.524 LINK poller_perf 00:04:44.525 LINK spdk_tgt 00:04:44.525 LINK spdk_trace_record 00:04:44.525 CC app/spdk_nvme_discover/discovery_aer.o 00:04:44.525 LINK spdk_trace 00:04:44.785 TEST_HEADER include/spdk/accel.h 00:04:44.785 TEST_HEADER include/spdk/accel_module.h 00:04:44.785 TEST_HEADER include/spdk/assert.h 00:04:44.785 TEST_HEADER include/spdk/barrier.h 00:04:44.785 TEST_HEADER include/spdk/base64.h 00:04:44.785 TEST_HEADER include/spdk/bdev.h 00:04:44.785 TEST_HEADER include/spdk/bdev_module.h 00:04:44.785 TEST_HEADER include/spdk/bdev_zone.h 00:04:44.785 TEST_HEADER include/spdk/bit_array.h 00:04:44.785 TEST_HEADER include/spdk/bit_pool.h 00:04:44.785 TEST_HEADER include/spdk/blob_bdev.h 00:04:44.785 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:44.785 TEST_HEADER include/spdk/blobfs.h 00:04:44.785 TEST_HEADER include/spdk/blob.h 00:04:44.785 TEST_HEADER include/spdk/conf.h 00:04:44.785 TEST_HEADER include/spdk/config.h 00:04:44.785 TEST_HEADER include/spdk/cpuset.h 00:04:44.785 TEST_HEADER include/spdk/crc16.h 00:04:44.785 TEST_HEADER include/spdk/crc32.h 00:04:44.785 TEST_HEADER include/spdk/crc64.h 00:04:44.785 TEST_HEADER include/spdk/dif.h 00:04:44.785 TEST_HEADER include/spdk/dma.h 00:04:44.785 CC examples/ioat/perf/perf.o 00:04:44.785 TEST_HEADER include/spdk/endian.h 00:04:44.785 TEST_HEADER include/spdk/env_dpdk.h 00:04:44.785 TEST_HEADER include/spdk/env.h 00:04:44.785 TEST_HEADER include/spdk/event.h 00:04:44.785 TEST_HEADER include/spdk/fd_group.h 00:04:44.785 TEST_HEADER include/spdk/fd.h 00:04:44.785 TEST_HEADER include/spdk/file.h 00:04:44.785 TEST_HEADER include/spdk/fsdev.h 00:04:44.785 TEST_HEADER include/spdk/fsdev_module.h 00:04:44.785 TEST_HEADER include/spdk/ftl.h 00:04:44.785 CC app/spdk_top/spdk_top.o 00:04:44.785 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:44.785 TEST_HEADER include/spdk/gpt_spec.h 00:04:44.785 LINK spdk_nvme_discover 00:04:44.785 TEST_HEADER include/spdk/hexlify.h 00:04:44.785 TEST_HEADER include/spdk/histogram_data.h 00:04:44.785 TEST_HEADER include/spdk/idxd.h 00:04:44.785 TEST_HEADER include/spdk/idxd_spec.h 00:04:44.785 TEST_HEADER include/spdk/init.h 00:04:44.785 TEST_HEADER include/spdk/ioat.h 00:04:44.785 TEST_HEADER include/spdk/ioat_spec.h 00:04:44.785 TEST_HEADER include/spdk/iscsi_spec.h 00:04:44.785 TEST_HEADER include/spdk/json.h 00:04:44.785 TEST_HEADER include/spdk/jsonrpc.h 00:04:44.785 CC examples/vmd/lsvmd/lsvmd.o 00:04:44.785 TEST_HEADER include/spdk/keyring.h 00:04:44.785 TEST_HEADER include/spdk/keyring_module.h 00:04:44.785 CC test/dma/test_dma/test_dma.o 00:04:44.785 TEST_HEADER include/spdk/likely.h 00:04:44.785 TEST_HEADER include/spdk/log.h 00:04:44.785 TEST_HEADER include/spdk/lvol.h 00:04:44.785 CC test/app/bdev_svc/bdev_svc.o 00:04:44.785 TEST_HEADER include/spdk/md5.h 00:04:44.785 TEST_HEADER include/spdk/memory.h 00:04:44.785 TEST_HEADER include/spdk/mmio.h 00:04:44.785 TEST_HEADER include/spdk/nbd.h 00:04:44.785 TEST_HEADER include/spdk/net.h 00:04:44.785 TEST_HEADER include/spdk/notify.h 00:04:44.785 TEST_HEADER include/spdk/nvme.h 00:04:44.785 TEST_HEADER include/spdk/nvme_intel.h 00:04:44.785 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:44.785 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:44.785 TEST_HEADER include/spdk/nvme_spec.h 00:04:44.785 TEST_HEADER include/spdk/nvme_zns.h 00:04:44.785 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:44.785 CC 
examples/vmd/led/led.o 00:04:44.785 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:44.785 TEST_HEADER include/spdk/nvmf.h 00:04:44.785 TEST_HEADER include/spdk/nvmf_spec.h 00:04:44.785 TEST_HEADER include/spdk/nvmf_transport.h 00:04:44.785 TEST_HEADER include/spdk/opal.h 00:04:44.785 TEST_HEADER include/spdk/opal_spec.h 00:04:44.785 TEST_HEADER include/spdk/pci_ids.h 00:04:44.785 TEST_HEADER include/spdk/pipe.h 00:04:44.785 TEST_HEADER include/spdk/queue.h 00:04:44.785 TEST_HEADER include/spdk/reduce.h 00:04:44.785 TEST_HEADER include/spdk/rpc.h 00:04:44.785 TEST_HEADER include/spdk/scheduler.h 00:04:44.785 TEST_HEADER include/spdk/scsi.h 00:04:44.785 TEST_HEADER include/spdk/scsi_spec.h 00:04:44.785 TEST_HEADER include/spdk/sock.h 00:04:44.785 TEST_HEADER include/spdk/stdinc.h 00:04:44.785 TEST_HEADER include/spdk/string.h 00:04:44.785 TEST_HEADER include/spdk/thread.h 00:04:44.785 TEST_HEADER include/spdk/trace.h 00:04:44.785 TEST_HEADER include/spdk/trace_parser.h 00:04:44.785 TEST_HEADER include/spdk/tree.h 00:04:44.785 TEST_HEADER include/spdk/ublk.h 00:04:44.785 TEST_HEADER include/spdk/util.h 00:04:44.785 TEST_HEADER include/spdk/uuid.h 00:04:44.785 TEST_HEADER include/spdk/version.h 00:04:44.785 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:44.785 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:44.785 TEST_HEADER include/spdk/vhost.h 00:04:44.785 TEST_HEADER include/spdk/vmd.h 00:04:44.785 TEST_HEADER include/spdk/xor.h 00:04:44.785 TEST_HEADER include/spdk/zipf.h 00:04:44.785 CXX test/cpp_headers/accel.o 00:04:45.045 LINK lsvmd 00:04:45.045 LINK ioat_perf 00:04:45.045 LINK bdev_svc 00:04:45.045 LINK led 00:04:45.045 CXX test/cpp_headers/accel_module.o 00:04:45.045 LINK spdk_nvme_identify 00:04:45.045 LINK spdk_nvme_perf 00:04:45.304 CC test/env/mem_callbacks/mem_callbacks.o 00:04:45.304 CC examples/ioat/verify/verify.o 00:04:45.304 CXX test/cpp_headers/assert.o 00:04:45.304 CC test/rpc_client/rpc_client_test.o 00:04:45.304 CC test/event/event_perf/event_perf.o 00:04:45.304 LINK test_dma 00:04:45.304 CC test/env/vtophys/vtophys.o 00:04:45.304 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:45.304 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:45.563 LINK event_perf 00:04:45.563 CXX test/cpp_headers/barrier.o 00:04:45.563 LINK rpc_client_test 00:04:45.563 LINK verify 00:04:45.563 LINK vtophys 00:04:45.563 CXX test/cpp_headers/base64.o 00:04:45.563 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:45.563 CC test/event/reactor/reactor.o 00:04:45.821 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:45.821 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:45.821 LINK mem_callbacks 00:04:45.821 CXX test/cpp_headers/bdev.o 00:04:45.821 CC examples/idxd/perf/perf.o 00:04:45.821 LINK spdk_top 00:04:45.821 LINK reactor 00:04:45.821 LINK nvme_fuzz 00:04:45.821 CC examples/thread/thread/thread_ex.o 00:04:46.081 LINK interrupt_tgt 00:04:46.081 CXX test/cpp_headers/bdev_module.o 00:04:46.081 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:46.081 CC test/event/reactor_perf/reactor_perf.o 00:04:46.081 CC app/spdk_dd/spdk_dd.o 00:04:46.081 CC test/event/app_repeat/app_repeat.o 00:04:46.081 LINK thread 00:04:46.081 LINK idxd_perf 00:04:46.081 LINK env_dpdk_post_init 00:04:46.081 CXX test/cpp_headers/bdev_zone.o 00:04:46.081 LINK vhost_fuzz 00:04:46.081 LINK reactor_perf 00:04:46.340 LINK app_repeat 00:04:46.340 CC test/event/scheduler/scheduler.o 00:04:46.340 CXX test/cpp_headers/bit_array.o 00:04:46.340 CC test/env/memory/memory_ut.o 00:04:46.340 CC test/app/histogram_perf/histogram_perf.o 
00:04:46.340 CC test/env/pci/pci_ut.o 00:04:46.340 LINK spdk_dd 00:04:46.600 CC app/fio/nvme/fio_plugin.o 00:04:46.600 LINK scheduler 00:04:46.600 CC examples/sock/hello_world/hello_sock.o 00:04:46.600 CXX test/cpp_headers/bit_pool.o 00:04:46.600 CC app/fio/bdev/fio_plugin.o 00:04:46.600 LINK histogram_perf 00:04:46.600 CXX test/cpp_headers/blob_bdev.o 00:04:46.858 LINK hello_sock 00:04:46.858 CXX test/cpp_headers/blobfs_bdev.o 00:04:46.858 CC app/vhost/vhost.o 00:04:46.858 CC test/accel/dif/dif.o 00:04:46.858 LINK pci_ut 00:04:46.858 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:47.117 CXX test/cpp_headers/blobfs.o 00:04:47.117 LINK vhost 00:04:47.117 LINK spdk_bdev 00:04:47.117 LINK spdk_nvme 00:04:47.117 CC test/blobfs/mkfs/mkfs.o 00:04:47.117 CXX test/cpp_headers/blob.o 00:04:47.117 CXX test/cpp_headers/conf.o 00:04:47.117 LINK hello_fsdev 00:04:47.117 LINK iscsi_fuzz 00:04:47.378 CC test/app/jsoncat/jsoncat.o 00:04:47.378 LINK mkfs 00:04:47.378 CXX test/cpp_headers/config.o 00:04:47.378 CC test/nvme/aer/aer.o 00:04:47.378 CXX test/cpp_headers/cpuset.o 00:04:47.378 CC test/lvol/esnap/esnap.o 00:04:47.378 CC test/nvme/reset/reset.o 00:04:47.378 LINK jsoncat 00:04:47.378 CXX test/cpp_headers/crc16.o 00:04:47.637 CXX test/cpp_headers/crc32.o 00:04:47.637 CC examples/accel/perf/accel_perf.o 00:04:47.637 LINK memory_ut 00:04:47.637 LINK dif 00:04:47.637 CC test/nvme/sgl/sgl.o 00:04:47.637 CC test/app/stub/stub.o 00:04:47.637 LINK aer 00:04:47.637 LINK reset 00:04:47.637 CC test/nvme/e2edp/nvme_dp.o 00:04:47.637 CXX test/cpp_headers/crc64.o 00:04:47.897 LINK stub 00:04:47.897 CXX test/cpp_headers/dif.o 00:04:47.897 CXX test/cpp_headers/dma.o 00:04:47.897 CC test/nvme/overhead/overhead.o 00:04:47.897 LINK sgl 00:04:47.897 CC test/nvme/err_injection/err_injection.o 00:04:47.897 CXX test/cpp_headers/endian.o 00:04:47.897 LINK nvme_dp 00:04:48.156 CC test/bdev/bdevio/bdevio.o 00:04:48.156 CC test/nvme/startup/startup.o 00:04:48.156 LINK accel_perf 00:04:48.156 CXX test/cpp_headers/env_dpdk.o 00:04:48.156 CC test/nvme/reserve/reserve.o 00:04:48.156 LINK err_injection 00:04:48.156 CC test/nvme/simple_copy/simple_copy.o 00:04:48.156 LINK overhead 00:04:48.156 LINK startup 00:04:48.415 CXX test/cpp_headers/env.o 00:04:48.415 CXX test/cpp_headers/event.o 00:04:48.415 LINK reserve 00:04:48.415 CC examples/blob/hello_world/hello_blob.o 00:04:48.415 LINK bdevio 00:04:48.415 LINK simple_copy 00:04:48.415 CC examples/nvme/hello_world/hello_world.o 00:04:48.415 CXX test/cpp_headers/fd_group.o 00:04:48.415 CXX test/cpp_headers/fd.o 00:04:48.415 CC examples/nvme/reconnect/reconnect.o 00:04:48.415 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:48.675 CC examples/nvme/arbitration/arbitration.o 00:04:48.675 LINK hello_blob 00:04:48.675 CXX test/cpp_headers/file.o 00:04:48.675 CXX test/cpp_headers/fsdev.o 00:04:48.675 LINK hello_world 00:04:48.675 CC test/nvme/connect_stress/connect_stress.o 00:04:48.675 CC examples/nvme/hotplug/hotplug.o 00:04:48.933 LINK reconnect 00:04:48.933 CXX test/cpp_headers/fsdev_module.o 00:04:48.933 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:48.933 LINK arbitration 00:04:48.933 LINK connect_stress 00:04:48.933 CC examples/blob/cli/blobcli.o 00:04:48.933 CC examples/nvme/abort/abort.o 00:04:48.933 LINK hotplug 00:04:48.933 CXX test/cpp_headers/ftl.o 00:04:49.193 LINK nvme_manage 00:04:49.193 CXX test/cpp_headers/fuse_dispatcher.o 00:04:49.193 LINK cmb_copy 00:04:49.193 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:49.193 CXX test/cpp_headers/gpt_spec.o 00:04:49.193 CC 
test/nvme/boot_partition/boot_partition.o 00:04:49.193 CXX test/cpp_headers/hexlify.o 00:04:49.193 LINK pmr_persistence 00:04:49.452 CC test/nvme/compliance/nvme_compliance.o 00:04:49.452 CC test/nvme/fused_ordering/fused_ordering.o 00:04:49.452 LINK boot_partition 00:04:49.452 LINK abort 00:04:49.452 CXX test/cpp_headers/histogram_data.o 00:04:49.452 CXX test/cpp_headers/idxd.o 00:04:49.452 LINK blobcli 00:04:49.452 CC examples/bdev/hello_world/hello_bdev.o 00:04:49.452 CC examples/bdev/bdevperf/bdevperf.o 00:04:49.452 CXX test/cpp_headers/idxd_spec.o 00:04:49.711 LINK fused_ordering 00:04:49.711 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:49.711 CXX test/cpp_headers/init.o 00:04:49.711 CC test/nvme/fdp/fdp.o 00:04:49.711 CC test/nvme/cuse/cuse.o 00:04:49.711 CXX test/cpp_headers/ioat.o 00:04:49.711 LINK nvme_compliance 00:04:49.711 LINK hello_bdev 00:04:49.711 CXX test/cpp_headers/ioat_spec.o 00:04:49.711 LINK doorbell_aers 00:04:49.711 CXX test/cpp_headers/iscsi_spec.o 00:04:49.971 CXX test/cpp_headers/json.o 00:04:49.971 CXX test/cpp_headers/jsonrpc.o 00:04:49.971 CXX test/cpp_headers/keyring.o 00:04:49.971 CXX test/cpp_headers/keyring_module.o 00:04:49.971 CXX test/cpp_headers/likely.o 00:04:49.971 CXX test/cpp_headers/log.o 00:04:49.971 LINK fdp 00:04:49.971 CXX test/cpp_headers/lvol.o 00:04:49.971 CXX test/cpp_headers/md5.o 00:04:49.971 CXX test/cpp_headers/memory.o 00:04:50.230 CXX test/cpp_headers/mmio.o 00:04:50.230 CXX test/cpp_headers/nbd.o 00:04:50.230 CXX test/cpp_headers/net.o 00:04:50.230 CXX test/cpp_headers/notify.o 00:04:50.230 CXX test/cpp_headers/nvme.o 00:04:50.230 CXX test/cpp_headers/nvme_intel.o 00:04:50.230 CXX test/cpp_headers/nvme_ocssd.o 00:04:50.230 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:50.230 CXX test/cpp_headers/nvme_spec.o 00:04:50.230 CXX test/cpp_headers/nvme_zns.o 00:04:50.230 CXX test/cpp_headers/nvmf_cmd.o 00:04:50.489 LINK bdevperf 00:04:50.489 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:50.489 CXX test/cpp_headers/nvmf.o 00:04:50.489 CXX test/cpp_headers/nvmf_spec.o 00:04:50.489 CXX test/cpp_headers/nvmf_transport.o 00:04:50.489 CXX test/cpp_headers/opal.o 00:04:50.489 CXX test/cpp_headers/opal_spec.o 00:04:50.489 CXX test/cpp_headers/pci_ids.o 00:04:50.489 CXX test/cpp_headers/pipe.o 00:04:50.489 CXX test/cpp_headers/queue.o 00:04:50.489 CXX test/cpp_headers/reduce.o 00:04:50.748 CXX test/cpp_headers/rpc.o 00:04:50.748 CXX test/cpp_headers/scheduler.o 00:04:50.748 CXX test/cpp_headers/scsi.o 00:04:50.748 CXX test/cpp_headers/scsi_spec.o 00:04:50.748 CXX test/cpp_headers/sock.o 00:04:50.748 CXX test/cpp_headers/stdinc.o 00:04:50.748 CXX test/cpp_headers/string.o 00:04:50.748 CC examples/nvmf/nvmf/nvmf.o 00:04:50.748 CXX test/cpp_headers/thread.o 00:04:50.748 CXX test/cpp_headers/trace.o 00:04:50.748 CXX test/cpp_headers/trace_parser.o 00:04:50.748 CXX test/cpp_headers/tree.o 00:04:51.006 CXX test/cpp_headers/ublk.o 00:04:51.006 CXX test/cpp_headers/util.o 00:04:51.006 CXX test/cpp_headers/uuid.o 00:04:51.006 CXX test/cpp_headers/version.o 00:04:51.006 CXX test/cpp_headers/vfio_user_pci.o 00:04:51.006 CXX test/cpp_headers/vfio_user_spec.o 00:04:51.006 CXX test/cpp_headers/vhost.o 00:04:51.006 CXX test/cpp_headers/vmd.o 00:04:51.006 LINK cuse 00:04:51.006 CXX test/cpp_headers/xor.o 00:04:51.006 CXX test/cpp_headers/zipf.o 00:04:51.006 LINK nvmf 00:04:53.579 LINK esnap 00:04:53.838 00:04:53.838 real 1m29.398s 00:04:53.838 user 7m51.556s 00:04:53.838 sys 1m59.800s 00:04:53.838 15:59:12 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 
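The END TEST make block just below tears down the background resource monitors: pm/common reads each monitor's PID file and sends SIGTERM. A minimal standalone sketch of that pattern is shown here; pm_stop_monitors, the monitor names and the output directory are illustrative stand-ins, not the real pm/common code.

#!/usr/bin/env bash
# Sketch: stop background resource monitors that recorded their PIDs
# in <output>/power/<name>.pid (illustrative layout, not the real pm/common).
pm_stop_monitors() {
    local output_dir=$1
    local monitor pid pid_file
    for monitor in collect-cpu-load collect-vmstat; do
        pid_file="$output_dir/power/$monitor.pid"
        [[ -e $pid_file ]] || continue        # this monitor never started
        pid=$(<"$pid_file")
        if kill -0 "$pid" 2>/dev/null; then   # still running?
            kill -TERM "$pid"
        fi
        rm -f "$pid_file"
    done
}

pm_stop_monitors /home/vagrant/spdk_repo/output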
00:04:53.838 15:59:12 make -- common/autotest_common.sh@10 -- $ set +x 00:04:53.838 ************************************ 00:04:53.838 END TEST make 00:04:53.838 ************************************ 00:04:53.838 15:59:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:53.838 15:59:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:53.838 15:59:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:53.838 15:59:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.838 15:59:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:53.838 15:59:12 -- pm/common@44 -- $ pid=5285 00:04:53.838 15:59:12 -- pm/common@50 -- $ kill -TERM 5285 00:04:53.838 15:59:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.838 15:59:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:53.838 15:59:12 -- pm/common@44 -- $ pid=5287 00:04:53.838 15:59:12 -- pm/common@50 -- $ kill -TERM 5287 00:04:53.838 15:59:12 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:53.838 15:59:12 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:53.838 15:59:12 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:53.838 15:59:12 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:53.838 15:59:12 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:54.098 15:59:12 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:54.098 15:59:12 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.098 15:59:12 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.098 15:59:12 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.098 15:59:12 -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.098 15:59:12 -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.098 15:59:12 -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.098 15:59:12 -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.098 15:59:12 -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.098 15:59:12 -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.098 15:59:12 -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.098 15:59:12 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.098 15:59:12 -- scripts/common.sh@344 -- # case "$op" in 00:04:54.098 15:59:12 -- scripts/common.sh@345 -- # : 1 00:04:54.098 15:59:12 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.098 15:59:12 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.098 15:59:12 -- scripts/common.sh@365 -- # decimal 1 00:04:54.098 15:59:12 -- scripts/common.sh@353 -- # local d=1 00:04:54.098 15:59:12 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.098 15:59:12 -- scripts/common.sh@355 -- # echo 1 00:04:54.098 15:59:12 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.098 15:59:12 -- scripts/common.sh@366 -- # decimal 2 00:04:54.098 15:59:12 -- scripts/common.sh@353 -- # local d=2 00:04:54.098 15:59:12 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.098 15:59:12 -- scripts/common.sh@355 -- # echo 2 00:04:54.098 15:59:12 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.098 15:59:12 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.098 15:59:12 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.098 15:59:12 -- scripts/common.sh@368 -- # return 0 00:04:54.098 15:59:12 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.098 15:59:12 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:54.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.098 --rc genhtml_branch_coverage=1 00:04:54.098 --rc genhtml_function_coverage=1 00:04:54.098 --rc genhtml_legend=1 00:04:54.098 --rc geninfo_all_blocks=1 00:04:54.098 --rc geninfo_unexecuted_blocks=1 00:04:54.098 00:04:54.098 ' 00:04:54.098 15:59:12 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:54.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.098 --rc genhtml_branch_coverage=1 00:04:54.098 --rc genhtml_function_coverage=1 00:04:54.098 --rc genhtml_legend=1 00:04:54.098 --rc geninfo_all_blocks=1 00:04:54.098 --rc geninfo_unexecuted_blocks=1 00:04:54.098 00:04:54.098 ' 00:04:54.098 15:59:12 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:54.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.098 --rc genhtml_branch_coverage=1 00:04:54.098 --rc genhtml_function_coverage=1 00:04:54.098 --rc genhtml_legend=1 00:04:54.098 --rc geninfo_all_blocks=1 00:04:54.098 --rc geninfo_unexecuted_blocks=1 00:04:54.098 00:04:54.098 ' 00:04:54.098 15:59:12 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:54.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.098 --rc genhtml_branch_coverage=1 00:04:54.098 --rc genhtml_function_coverage=1 00:04:54.098 --rc genhtml_legend=1 00:04:54.098 --rc geninfo_all_blocks=1 00:04:54.098 --rc geninfo_unexecuted_blocks=1 00:04:54.098 00:04:54.098 ' 00:04:54.098 15:59:12 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:54.098 15:59:12 -- nvmf/common.sh@7 -- # uname -s 00:04:54.098 15:59:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:54.098 15:59:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:54.098 15:59:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:54.098 15:59:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:54.098 15:59:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:54.098 15:59:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:54.098 15:59:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:54.098 15:59:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:54.098 15:59:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:54.098 15:59:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:54.098 15:59:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b307c85-9e07-4f18-80b6-51adc42f99df 00:04:54.098 
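The lt 1.15 2 trace above splits both version strings on dots and compares them component by component to decide which lcov coverage flags to pass. A simplified sketch of that kind of comparison follows; version_lt is an illustrative helper, not the scripts/common.sh implementation.

#!/usr/bin/env bash
# Sketch: succeed if dotted version $1 is strictly less than $2.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}     # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                                  # equal is not "less than"
}

# Example: lcov 1.x still needs the branch/function coverage rc flags.
if version_lt "1.15" "2"; then
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
fi
echo "${LCOV_OPTS:-}"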
15:59:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=2b307c85-9e07-4f18-80b6-51adc42f99df 00:04:54.098 15:59:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:54.098 15:59:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:54.098 15:59:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:54.098 15:59:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:54.098 15:59:12 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:54.098 15:59:12 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:54.098 15:59:12 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:54.098 15:59:12 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:54.098 15:59:12 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:54.098 15:59:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.098 15:59:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.098 15:59:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.098 15:59:12 -- paths/export.sh@5 -- # export PATH 00:04:54.098 15:59:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.098 15:59:12 -- nvmf/common.sh@51 -- # : 0 00:04:54.098 15:59:12 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:54.098 15:59:12 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:54.098 15:59:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:54.098 15:59:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:54.098 15:59:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:54.098 15:59:12 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:54.098 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:54.098 15:59:12 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:54.098 15:59:12 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:54.098 15:59:12 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:54.098 15:59:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:54.098 15:59:12 -- spdk/autotest.sh@32 -- # uname -s 00:04:54.098 15:59:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:54.098 15:59:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:54.098 15:59:12 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:54.099 15:59:12 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:54.099 15:59:12 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:54.099 15:59:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:54.099 15:59:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:54.099 15:59:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:54.099 15:59:12 -- spdk/autotest.sh@48 -- # udevadm_pid=54790 00:04:54.099 15:59:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:54.099 15:59:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:54.099 15:59:12 -- pm/common@17 -- # local monitor 00:04:54.099 15:59:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:54.099 15:59:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:54.099 15:59:12 -- pm/common@21 -- # date +%s 00:04:54.099 15:59:12 -- pm/common@25 -- # sleep 1 00:04:54.099 15:59:12 -- pm/common@21 -- # date +%s 00:04:54.099 15:59:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730735952 00:04:54.099 15:59:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730735952 00:04:54.099 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730735952_collect-vmstat.pm.log 00:04:54.099 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730735952_collect-cpu-load.pm.log 00:04:55.036 15:59:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:55.036 15:59:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:55.036 15:59:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.036 15:59:13 -- common/autotest_common.sh@10 -- # set +x 00:04:55.036 15:59:13 -- spdk/autotest.sh@59 -- # create_test_list 00:04:55.036 15:59:13 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:55.036 15:59:13 -- common/autotest_common.sh@10 -- # set +x 00:04:55.295 15:59:13 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:55.295 15:59:13 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:55.295 15:59:13 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:55.295 15:59:13 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:55.295 15:59:13 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:55.295 15:59:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:55.295 15:59:13 -- common/autotest_common.sh@1455 -- # uname 00:04:55.295 15:59:13 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:55.295 15:59:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:55.295 15:59:13 -- common/autotest_common.sh@1475 -- # uname 00:04:55.295 15:59:13 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:55.295 15:59:13 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:55.295 15:59:13 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:55.295 lcov: LCOV version 1.15 00:04:55.296 15:59:13 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:13.391 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:13.391 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:28.285 15:59:44 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:28.285 15:59:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:28.285 15:59:44 -- common/autotest_common.sh@10 -- # set +x 00:05:28.285 15:59:44 -- spdk/autotest.sh@78 -- # rm -f 00:05:28.285 15:59:44 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:28.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:28.285 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:28.285 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:28.285 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:28.285 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:28.285 15:59:46 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:28.285 15:59:46 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:28.285 15:59:46 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:28.285 15:59:46 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:28.285 15:59:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:28.285 15:59:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:28.285 15:59:46 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:28.285 15:59:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:28.285 15:59:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:28.285 15:59:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:28.285 15:59:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:28.285 15:59:46 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:28.285 15:59:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:28.285 15:59:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:28.285 15:59:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:28.285 15:59:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:28.285 15:59:46 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:28.285 15:59:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:28.285 15:59:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:28.285 15:59:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:28.285 15:59:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:05:28.285 15:59:46 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:05:28.285 15:59:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:28.285 15:59:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:28.285 15:59:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:28.285 15:59:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:05:28.285 15:59:46 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:05:28.285 15:59:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:28.285 15:59:46 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:28.286 15:59:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:28.286 15:59:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:05:28.286 15:59:46 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:05:28.286 15:59:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:28.286 15:59:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:28.286 15:59:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:28.286 15:59:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:28.286 15:59:46 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:28.286 15:59:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:28.286 15:59:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:28.286 15:59:46 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:28.286 15:59:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:28.286 15:59:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:28.286 15:59:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:28.286 15:59:46 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:28.286 15:59:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:28.286 No valid GPT data, bailing 00:05:28.286 15:59:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:28.286 15:59:46 -- scripts/common.sh@394 -- # pt= 00:05:28.286 15:59:46 -- scripts/common.sh@395 -- # return 1 00:05:28.286 15:59:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:28.286 1+0 records in 00:05:28.286 1+0 records out 00:05:28.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171033 s, 61.3 MB/s 00:05:28.286 15:59:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:28.286 15:59:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:28.286 15:59:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:28.286 15:59:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:28.286 15:59:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:28.286 No valid GPT data, bailing 00:05:28.286 15:59:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:28.286 15:59:46 -- scripts/common.sh@394 -- # pt= 00:05:28.286 15:59:46 -- scripts/common.sh@395 -- # return 1 00:05:28.286 15:59:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:28.286 1+0 records in 00:05:28.286 1+0 records out 00:05:28.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00623134 s, 168 MB/s 00:05:28.286 15:59:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:28.286 15:59:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:28.286 15:59:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:28.286 15:59:46 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:28.286 15:59:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:28.286 No valid GPT data, bailing 00:05:28.286 15:59:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:28.286 15:59:46 -- scripts/common.sh@394 -- # pt= 00:05:28.286 15:59:46 -- scripts/common.sh@395 -- # return 1 00:05:28.286 15:59:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:28.286 1+0 
records in 00:05:28.286 1+0 records out 00:05:28.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00906154 s, 116 MB/s 00:05:28.286 15:59:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:28.286 15:59:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:28.286 15:59:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:28.286 15:59:46 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:28.286 15:59:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:28.286 No valid GPT data, bailing 00:05:28.286 15:59:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:28.286 15:59:46 -- scripts/common.sh@394 -- # pt= 00:05:28.286 15:59:46 -- scripts/common.sh@395 -- # return 1 00:05:28.286 15:59:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:28.286 1+0 records in 00:05:28.286 1+0 records out 00:05:28.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00619198 s, 169 MB/s 00:05:28.286 15:59:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:28.286 15:59:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:28.286 15:59:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:28.286 15:59:46 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:28.286 15:59:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:28.286 No valid GPT data, bailing 00:05:28.286 15:59:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:28.286 15:59:46 -- scripts/common.sh@394 -- # pt= 00:05:28.286 15:59:46 -- scripts/common.sh@395 -- # return 1 00:05:28.286 15:59:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:28.286 1+0 records in 00:05:28.286 1+0 records out 00:05:28.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00622677 s, 168 MB/s 00:05:28.286 15:59:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:28.286 15:59:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:28.286 15:59:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:28.286 15:59:46 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:28.286 15:59:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:28.286 No valid GPT data, bailing 00:05:28.286 15:59:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:28.286 15:59:46 -- scripts/common.sh@394 -- # pt= 00:05:28.286 15:59:46 -- scripts/common.sh@395 -- # return 1 00:05:28.286 15:59:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:28.286 1+0 records in 00:05:28.286 1+0 records out 00:05:28.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0073168 s, 143 MB/s 00:05:28.286 15:59:46 -- spdk/autotest.sh@105 -- # sync 00:05:28.286 15:59:46 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:28.286 15:59:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:28.286 15:59:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:31.573 15:59:49 -- spdk/autotest.sh@111 -- # uname -s 00:05:31.573 15:59:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:31.573 15:59:49 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:31.573 15:59:49 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:31.833 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.402 
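The pre_cleanup steps above skip zoned namespaces, probe each /dev/nvme*n* for an existing partition table and, when none is found, zero the first MiB with dd. A condensed sketch of that flow follows; wipe_unused_nvme is an illustrative helper, and the sketch uses blkid alone rather than the spdk-gpt.py check from the trace.

#!/usr/bin/env bash
# Sketch: zero the start of every non-zoned NVMe namespace that has no
# recognizable partition table (condensed version of the traced cleanup).
shopt -s extglob nullglob

wipe_unused_nvme() {
    local dev name zoned pt
    for dev in /dev/nvme*n!(*p*); do          # namespaces, not partitions
        name=${dev##*/}
        zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null || echo none)
        if [[ $zoned != none ]]; then
            echo "skipping zoned device $dev"
            continue
        fi
        pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null)
        if [[ -n $pt ]]; then
            echo "$dev has a $pt partition table, leaving it alone"
            continue
        fi
        dd if=/dev/zero of="$dev" bs=1M count=1 conv=fsync
    done
}

wipe_unused_nvme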
Hugepages 00:05:32.402 node hugesize free / total 00:05:32.402 node0 1048576kB 0 / 0 00:05:32.402 node0 2048kB 0 / 0 00:05:32.402 00:05:32.402 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:32.660 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:32.660 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:32.919 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:32.919 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:33.178 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:33.178 15:59:51 -- spdk/autotest.sh@117 -- # uname -s 00:05:33.178 15:59:51 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:33.178 15:59:51 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:33.178 15:59:51 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:33.747 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.313 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.573 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.573 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.573 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.573 15:59:53 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:35.950 15:59:54 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:35.950 15:59:54 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:35.950 15:59:54 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:35.950 15:59:54 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:35.950 15:59:54 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:35.950 15:59:54 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:35.950 15:59:54 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:35.950 15:59:54 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:35.951 15:59:54 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:35.951 15:59:54 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:35.951 15:59:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:35.951 15:59:54 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:36.518 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.777 Waiting for block devices as requested 00:05:36.777 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:36.777 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.036 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.036 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:42.310 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:42.310 16:00:00 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:42.310 16:00:00 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:42.310 16:00:00 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:42.310 16:00:00 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.310 16:00:00 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:42.310 16:00:00 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:42.310 16:00:00 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:42.310 16:00:00 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:42.310 16:00:00 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:42.310 16:00:00 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:42.310 16:00:00 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:42.310 16:00:00 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:42.310 16:00:00 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:42.310 16:00:00 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:42.310 16:00:00 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:42.310 16:00:00 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:42.310 16:00:00 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:42.310 16:00:00 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:42.310 16:00:00 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:42.310 16:00:00 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:42.310 16:00:00 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:42.310 16:00:00 -- common/autotest_common.sh@1541 -- # continue 00:05:42.310 16:00:00 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:42.310 16:00:00 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:42.310 16:00:00 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.310 16:00:00 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:42.310 16:00:00 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:42.310 16:00:00 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:42.310 16:00:00 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:42.310 16:00:00 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:42.310 16:00:00 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:42.310 16:00:00 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:42.310 16:00:00 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:42.310 16:00:00 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:42.310 16:00:00 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:42.310 16:00:00 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:42.310 16:00:00 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:42.310 16:00:00 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:42.310 16:00:00 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:42.310 16:00:00 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:42.310 16:00:00 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:42.310 16:00:00 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:42.310 16:00:00 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:42.310 16:00:00 -- common/autotest_common.sh@1541 -- # continue 00:05:42.310 16:00:00 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:42.310 16:00:00 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:42.310 16:00:00 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.310 16:00:00 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:42.310 16:00:00 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:42.310 16:00:00 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:42.310 16:00:00 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:42.310 16:00:00 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:42.310 16:00:00 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:42.310 16:00:00 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:42.310 16:00:00 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:42.310 16:00:00 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:42.310 16:00:00 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:42.310 16:00:00 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:42.310 16:00:00 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:42.310 16:00:00 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:42.310 16:00:00 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:42.310 16:00:00 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:42.310 16:00:00 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:42.310 16:00:00 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:42.310 16:00:00 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:42.310 16:00:00 -- common/autotest_common.sh@1541 -- # continue 00:05:42.310 16:00:00 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:42.310 16:00:00 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:42.310 16:00:00 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:42.310 16:00:00 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:42.310 16:00:00 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:42.310 16:00:00 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:42.310 16:00:00 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:42.310 16:00:00 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:42.310 16:00:00 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:42.310 16:00:00 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:42.310 16:00:00 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:42.310 16:00:00 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:42.310 16:00:00 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:42.310 16:00:00 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:42.310 16:00:00 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:42.310 16:00:00 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:42.310 16:00:00 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:42.310 16:00:00 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:42.310 16:00:00 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:42.310 16:00:00 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:42.310 16:00:00 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
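The loop traced above resolves each PCI address to its /dev/nvmeX controller through sysfs symlinks, then uses nvme id-ctrl to confirm the OACS namespace-management bit and an empty unallocated capacity. A reduced sketch of that lookup follows; ctrlr_from_bdf and has_ns_management are illustrative helpers, not the autotest_common.sh functions.

#!/usr/bin/env bash
# Sketch: map a PCI BDF (e.g. 0000:00:10.0) to its NVMe character device
# and check whether the controller advertises namespace management.

ctrlr_from_bdf() {
    local bdf=$1 link
    for link in /sys/class/nvme/nvme*; do
        # The resolved symlink ends in .../<bdf>/nvme/nvmeX for the right one.
        if [[ $(readlink -f "$link") == *"/$bdf/nvme/"* ]]; then
            echo "/dev/$(basename "$link")"
            return 0
        fi
    done
    return 1
}

has_ns_management() {
    local ctrlr=$1 oacs
    # OACS bit 3 (0x8) advertises namespace management/attachment support.
    oacs=$(nvme id-ctrl "$ctrlr" | awk -F: '/^oacs/ {print $2}')
    (( oacs & 0x8 ))
}

ctrlr=$(ctrlr_from_bdf 0000:00:10.0) || exit 1
if has_ns_management "$ctrlr"; then
    echo "$ctrlr supports namespace management"
fi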
00:05:42.310 16:00:00 -- common/autotest_common.sh@1541 -- # continue 00:05:42.310 16:00:00 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:42.310 16:00:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.310 16:00:00 -- common/autotest_common.sh@10 -- # set +x 00:05:42.310 16:00:00 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:42.310 16:00:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:42.310 16:00:00 -- common/autotest_common.sh@10 -- # set +x 00:05:42.310 16:00:01 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:43.245 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.813 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.813 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.813 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.071 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:44.071 16:00:02 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:44.071 16:00:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:44.071 16:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:44.072 16:00:02 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:44.072 16:00:02 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:44.072 16:00:02 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:44.072 16:00:02 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:44.072 16:00:02 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:44.072 16:00:02 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:44.072 16:00:02 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:44.072 16:00:02 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:44.072 16:00:02 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:44.072 16:00:02 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:44.072 16:00:02 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:44.072 16:00:02 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:44.072 16:00:02 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:44.330 16:00:02 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:44.330 16:00:02 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:44.330 16:00:02 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:44.330 16:00:02 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:44.330 16:00:02 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:44.330 16:00:02 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:44.330 16:00:02 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:44.331 16:00:02 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:44.331 16:00:02 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:44.331 16:00:02 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:44.331 16:00:02 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:44.331 16:00:02 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:44.331 16:00:02 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:44.331 16:00:02 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
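opal_revert_cleanup, traced around this point, reads each controller's PCI device ID from sysfs and keeps only the entries matching 0x0a54, since only those need an Opal revert before the tests run. A small sketch of that filter follows; nvme_bdfs_with_device_id is an illustrative name, not the autotest_common.sh helper.

#!/usr/bin/env bash
# Sketch: print the PCI addresses whose device ID matches the wanted value.
nvme_bdfs_with_device_id() {
    local wanted=$1 bdf dev_id
    for bdf in "${@:2}"; do
        dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $dev_id == "$wanted" ]] && echo "$bdf"
    done
}

# None of the emulated 0x0010 controllers in this run match, so the revert
# step is skipped, exactly as the (( 0 > 0 )) check in the trace that follows concludes.
nvme_bdfs_with_device_id 0x0a54 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0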
00:05:44.331 16:00:02 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:44.331 16:00:02 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:44.331 16:00:02 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:44.331 16:00:02 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:44.331 16:00:02 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:44.331 16:00:02 -- common/autotest_common.sh@1570 -- # return 0 00:05:44.331 16:00:02 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:44.331 16:00:02 -- common/autotest_common.sh@1578 -- # return 0 00:05:44.331 16:00:02 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:44.331 16:00:02 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:44.331 16:00:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:44.331 16:00:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:44.331 16:00:02 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:44.331 16:00:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:44.331 16:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:44.331 16:00:02 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:44.331 16:00:02 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:44.331 16:00:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:44.331 16:00:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:44.331 16:00:02 -- common/autotest_common.sh@10 -- # set +x 00:05:44.331 ************************************ 00:05:44.331 START TEST env 00:05:44.331 ************************************ 00:05:44.331 16:00:02 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:44.590 * Looking for test storage... 00:05:44.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:44.590 16:00:03 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:44.590 16:00:03 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:44.590 16:00:03 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:44.590 16:00:03 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:44.590 16:00:03 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.590 16:00:03 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.590 16:00:03 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.590 16:00:03 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.590 16:00:03 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.590 16:00:03 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.590 16:00:03 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.590 16:00:03 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.590 16:00:03 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.590 16:00:03 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.590 16:00:03 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.590 16:00:03 env -- scripts/common.sh@344 -- # case "$op" in 00:05:44.590 16:00:03 env -- scripts/common.sh@345 -- # : 1 00:05:44.590 16:00:03 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.590 16:00:03 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.590 16:00:03 env -- scripts/common.sh@365 -- # decimal 1 00:05:44.590 16:00:03 env -- scripts/common.sh@353 -- # local d=1 00:05:44.590 16:00:03 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.590 16:00:03 env -- scripts/common.sh@355 -- # echo 1 00:05:44.590 16:00:03 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.590 16:00:03 env -- scripts/common.sh@366 -- # decimal 2 00:05:44.590 16:00:03 env -- scripts/common.sh@353 -- # local d=2 00:05:44.590 16:00:03 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.590 16:00:03 env -- scripts/common.sh@355 -- # echo 2 00:05:44.590 16:00:03 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.590 16:00:03 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.590 16:00:03 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.590 16:00:03 env -- scripts/common.sh@368 -- # return 0 00:05:44.590 16:00:03 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.590 16:00:03 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:44.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.590 --rc genhtml_branch_coverage=1 00:05:44.590 --rc genhtml_function_coverage=1 00:05:44.590 --rc genhtml_legend=1 00:05:44.590 --rc geninfo_all_blocks=1 00:05:44.590 --rc geninfo_unexecuted_blocks=1 00:05:44.590 00:05:44.590 ' 00:05:44.590 16:00:03 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:44.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.590 --rc genhtml_branch_coverage=1 00:05:44.590 --rc genhtml_function_coverage=1 00:05:44.590 --rc genhtml_legend=1 00:05:44.590 --rc geninfo_all_blocks=1 00:05:44.590 --rc geninfo_unexecuted_blocks=1 00:05:44.590 00:05:44.590 ' 00:05:44.590 16:00:03 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:44.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.590 --rc genhtml_branch_coverage=1 00:05:44.590 --rc genhtml_function_coverage=1 00:05:44.590 --rc genhtml_legend=1 00:05:44.590 --rc geninfo_all_blocks=1 00:05:44.590 --rc geninfo_unexecuted_blocks=1 00:05:44.590 00:05:44.590 ' 00:05:44.590 16:00:03 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:44.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.590 --rc genhtml_branch_coverage=1 00:05:44.590 --rc genhtml_function_coverage=1 00:05:44.590 --rc genhtml_legend=1 00:05:44.590 --rc geninfo_all_blocks=1 00:05:44.590 --rc geninfo_unexecuted_blocks=1 00:05:44.590 00:05:44.590 ' 00:05:44.590 16:00:03 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:44.590 16:00:03 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:44.590 16:00:03 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:44.590 16:00:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.590 ************************************ 00:05:44.590 START TEST env_memory 00:05:44.590 ************************************ 00:05:44.590 16:00:03 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:44.590 00:05:44.590 00:05:44.590 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.590 http://cunit.sourceforge.net/ 00:05:44.590 00:05:44.590 00:05:44.590 Suite: memory 00:05:44.590 Test: alloc and free memory map ...[2024-11-04 16:00:03.255063] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:44.590 passed 00:05:44.590 Test: mem map translation ...[2024-11-04 16:00:03.300694] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:44.590 [2024-11-04 16:00:03.300771] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:44.590 [2024-11-04 16:00:03.300848] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:44.590 [2024-11-04 16:00:03.300877] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:44.849 passed 00:05:44.849 Test: mem map registration ...[2024-11-04 16:00:03.370779] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:44.849 [2024-11-04 16:00:03.370858] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:44.849 passed 00:05:44.849 Test: mem map adjacent registrations ...passed 00:05:44.849 00:05:44.849 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.849 suites 1 1 n/a 0 0 00:05:44.849 tests 4 4 4 0 0 00:05:44.849 asserts 152 152 152 0 n/a 00:05:44.849 00:05:44.849 Elapsed time = 0.252 seconds 00:05:44.849 00:05:44.849 real 0m0.308s 00:05:44.849 user 0m0.258s 00:05:44.849 sys 0m0.041s 00:05:44.849 16:00:03 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.849 16:00:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:44.849 ************************************ 00:05:44.849 END TEST env_memory 00:05:44.849 ************************************ 00:05:44.849 16:00:03 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:44.849 16:00:03 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:44.849 16:00:03 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:44.849 16:00:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.849 ************************************ 00:05:44.849 START TEST env_vtophys 00:05:44.849 ************************************ 00:05:44.849 16:00:03 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:45.108 EAL: lib.eal log level changed from notice to debug 00:05:45.108 EAL: Detected lcore 0 as core 0 on socket 0 00:05:45.108 EAL: Detected lcore 1 as core 0 on socket 0 00:05:45.108 EAL: Detected lcore 2 as core 0 on socket 0 00:05:45.108 EAL: Detected lcore 3 as core 0 on socket 0 00:05:45.108 EAL: Detected lcore 4 as core 0 on socket 0 00:05:45.108 EAL: Detected lcore 5 as core 0 on socket 0 00:05:45.108 EAL: Detected lcore 6 as core 0 on socket 0 00:05:45.108 EAL: Detected lcore 7 as core 0 on socket 0 00:05:45.108 EAL: Detected lcore 8 as core 0 on socket 0 00:05:45.108 EAL: Detected lcore 9 as core 0 on socket 0 00:05:45.108 EAL: Maximum logical cores by configuration: 128 00:05:45.108 EAL: Detected CPU lcores: 10 00:05:45.108 EAL: Detected NUMA nodes: 1 00:05:45.108 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:45.108 EAL: Detected shared linkage of DPDK 00:05:45.108 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:45.108 EAL: Selected IOVA mode 'PA' 00:05:45.108 EAL: Probing VFIO support... 00:05:45.108 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:45.108 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:45.108 EAL: Ask a virtual area of 0x2e000 bytes 00:05:45.108 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:45.108 EAL: Setting up physically contiguous memory... 00:05:45.108 EAL: Setting maximum number of open files to 524288 00:05:45.108 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:45.108 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:45.108 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.108 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:45.108 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.108 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.108 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:45.108 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:45.108 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.109 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:45.109 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.109 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.109 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:45.109 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:45.109 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.109 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:45.109 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.109 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.109 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:45.109 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:45.109 EAL: Ask a virtual area of 0x61000 bytes 00:05:45.109 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:45.109 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:45.109 EAL: Ask a virtual area of 0x400000000 bytes 00:05:45.109 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:45.109 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:45.109 EAL: Hugepages will be freed exactly as allocated. 00:05:45.109 EAL: No shared files mode enabled, IPC is disabled 00:05:45.109 EAL: No shared files mode enabled, IPC is disabled 00:05:45.109 EAL: TSC frequency is ~2490000 KHz 00:05:45.109 EAL: Main lcore 0 is ready (tid=7faeb2ad4a40;cpuset=[0]) 00:05:45.109 EAL: Trying to obtain current memory policy. 00:05:45.109 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.109 EAL: Restoring previous memory policy: 0 00:05:45.109 EAL: request: mp_malloc_sync 00:05:45.109 EAL: No shared files mode enabled, IPC is disabled 00:05:45.109 EAL: Heap on socket 0 was expanded by 2MB 00:05:45.109 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:45.109 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:45.109 EAL: Mem event callback 'spdk:(nil)' registered 00:05:45.109 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:45.109 00:05:45.109 00:05:45.109 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.109 http://cunit.sourceforge.net/ 00:05:45.109 00:05:45.109 00:05:45.109 Suite: components_suite 00:05:45.676 Test: vtophys_malloc_test ...passed 00:05:45.676 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:45.676 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.676 EAL: Restoring previous memory policy: 4 00:05:45.676 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.676 EAL: request: mp_malloc_sync 00:05:45.676 EAL: No shared files mode enabled, IPC is disabled 00:05:45.676 EAL: Heap on socket 0 was expanded by 4MB 00:05:45.676 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.676 EAL: request: mp_malloc_sync 00:05:45.676 EAL: No shared files mode enabled, IPC is disabled 00:05:45.676 EAL: Heap on socket 0 was shrunk by 4MB 00:05:45.676 EAL: Trying to obtain current memory policy. 00:05:45.676 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.676 EAL: Restoring previous memory policy: 4 00:05:45.676 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.676 EAL: request: mp_malloc_sync 00:05:45.676 EAL: No shared files mode enabled, IPC is disabled 00:05:45.676 EAL: Heap on socket 0 was expanded by 6MB 00:05:45.676 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.676 EAL: request: mp_malloc_sync 00:05:45.676 EAL: No shared files mode enabled, IPC is disabled 00:05:45.676 EAL: Heap on socket 0 was shrunk by 6MB 00:05:45.676 EAL: Trying to obtain current memory policy. 00:05:45.676 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.676 EAL: Restoring previous memory policy: 4 00:05:45.676 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.676 EAL: request: mp_malloc_sync 00:05:45.676 EAL: No shared files mode enabled, IPC is disabled 00:05:45.676 EAL: Heap on socket 0 was expanded by 10MB 00:05:45.676 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.676 EAL: request: mp_malloc_sync 00:05:45.676 EAL: No shared files mode enabled, IPC is disabled 00:05:45.676 EAL: Heap on socket 0 was shrunk by 10MB 00:05:45.676 EAL: Trying to obtain current memory policy. 00:05:45.676 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.676 EAL: Restoring previous memory policy: 4 00:05:45.676 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.676 EAL: request: mp_malloc_sync 00:05:45.676 EAL: No shared files mode enabled, IPC is disabled 00:05:45.676 EAL: Heap on socket 0 was expanded by 18MB 00:05:45.676 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.676 EAL: request: mp_malloc_sync 00:05:45.676 EAL: No shared files mode enabled, IPC is disabled 00:05:45.676 EAL: Heap on socket 0 was shrunk by 18MB 00:05:45.676 EAL: Trying to obtain current memory policy. 00:05:45.676 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.676 EAL: Restoring previous memory policy: 4 00:05:45.676 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.676 EAL: request: mp_malloc_sync 00:05:45.676 EAL: No shared files mode enabled, IPC is disabled 00:05:45.676 EAL: Heap on socket 0 was expanded by 34MB 00:05:45.936 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.936 EAL: request: mp_malloc_sync 00:05:45.936 EAL: No shared files mode enabled, IPC is disabled 00:05:45.936 EAL: Heap on socket 0 was shrunk by 34MB 00:05:45.936 EAL: Trying to obtain current memory policy. 
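The four 0x400000000-byte virtual areas reserved during the EAL init above are the memseg lists sized from the figures printed alongside them: n_segs 8192 x hugepage_sz 2,097,152 B = 2^34 B = 0x400000000 (16 GiB) per list, so the four lists together pre-reserve 64 GiB of virtual address space even though no hugepages are actually backed at that point.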
00:05:45.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:45.936 EAL: Restoring previous memory policy: 4 00:05:45.936 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.936 EAL: request: mp_malloc_sync 00:05:45.936 EAL: No shared files mode enabled, IPC is disabled 00:05:45.936 EAL: Heap on socket 0 was expanded by 66MB 00:05:45.936 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.936 EAL: request: mp_malloc_sync 00:05:45.936 EAL: No shared files mode enabled, IPC is disabled 00:05:45.936 EAL: Heap on socket 0 was shrunk by 66MB 00:05:46.197 EAL: Trying to obtain current memory policy. 00:05:46.197 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.197 EAL: Restoring previous memory policy: 4 00:05:46.197 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.197 EAL: request: mp_malloc_sync 00:05:46.197 EAL: No shared files mode enabled, IPC is disabled 00:05:46.197 EAL: Heap on socket 0 was expanded by 130MB 00:05:46.455 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.455 EAL: request: mp_malloc_sync 00:05:46.455 EAL: No shared files mode enabled, IPC is disabled 00:05:46.455 EAL: Heap on socket 0 was shrunk by 130MB 00:05:46.715 EAL: Trying to obtain current memory policy. 00:05:46.715 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:46.715 EAL: Restoring previous memory policy: 4 00:05:46.715 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.715 EAL: request: mp_malloc_sync 00:05:46.715 EAL: No shared files mode enabled, IPC is disabled 00:05:46.715 EAL: Heap on socket 0 was expanded by 258MB 00:05:47.282 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.282 EAL: request: mp_malloc_sync 00:05:47.282 EAL: No shared files mode enabled, IPC is disabled 00:05:47.282 EAL: Heap on socket 0 was shrunk by 258MB 00:05:47.541 EAL: Trying to obtain current memory policy. 00:05:47.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.799 EAL: Restoring previous memory policy: 4 00:05:47.799 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.799 EAL: request: mp_malloc_sync 00:05:47.799 EAL: No shared files mode enabled, IPC is disabled 00:05:47.799 EAL: Heap on socket 0 was expanded by 514MB 00:05:48.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.736 EAL: request: mp_malloc_sync 00:05:48.736 EAL: No shared files mode enabled, IPC is disabled 00:05:48.736 EAL: Heap on socket 0 was shrunk by 514MB 00:05:49.672 EAL: Trying to obtain current memory policy. 
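The alternating "Heap on socket 0 was expanded by N MB" / "was shrunk by N MB" pairs in this suite are vtophys_spdk_malloc_test allocating and then freeing progressively larger DPDK buffers (4 MB up to 1026 MB), so every grow and shrink of the EAL heap fires the 'spdk:(nil)' mem event callback registered during init. A rough way to re-run just this binary outside the harness (a sketch only; it assumes the SPDK repo root as the working directory and enough free hugepages for the largest allocation, with HUGEMEM given in MB):

  sudo HUGEMEM=4096 ./scripts/setup.sh      # reserve hugepages first
  sudo ./test/env/vtophys/vtophys           # same binary run_test invokes above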
00:05:49.672 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.932 EAL: Restoring previous memory policy: 4 00:05:49.932 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.932 EAL: request: mp_malloc_sync 00:05:49.932 EAL: No shared files mode enabled, IPC is disabled 00:05:49.932 EAL: Heap on socket 0 was expanded by 1026MB 00:05:51.841 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.841 EAL: request: mp_malloc_sync 00:05:51.841 EAL: No shared files mode enabled, IPC is disabled 00:05:51.841 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:53.253 passed 00:05:53.253 00:05:53.253 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.253 suites 1 1 n/a 0 0 00:05:53.253 tests 2 2 2 0 0 00:05:53.253 asserts 5761 5761 5761 0 n/a 00:05:53.253 00:05:53.253 Elapsed time = 8.101 seconds 00:05:53.253 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.253 EAL: request: mp_malloc_sync 00:05:53.253 EAL: No shared files mode enabled, IPC is disabled 00:05:53.253 EAL: Heap on socket 0 was shrunk by 2MB 00:05:53.253 EAL: No shared files mode enabled, IPC is disabled 00:05:53.253 EAL: No shared files mode enabled, IPC is disabled 00:05:53.253 EAL: No shared files mode enabled, IPC is disabled 00:05:53.512 00:05:53.512 real 0m8.447s 00:05:53.512 user 0m7.369s 00:05:53.512 sys 0m0.911s 00:05:53.512 16:00:12 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.512 ************************************ 00:05:53.512 END TEST env_vtophys 00:05:53.512 ************************************ 00:05:53.512 16:00:12 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:53.512 16:00:12 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:53.512 16:00:12 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:53.512 16:00:12 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:53.512 16:00:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.512 ************************************ 00:05:53.512 START TEST env_pci 00:05:53.512 ************************************ 00:05:53.512 16:00:12 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:53.512 00:05:53.512 00:05:53.512 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.512 http://cunit.sourceforge.net/ 00:05:53.512 00:05:53.512 00:05:53.512 Suite: pci 00:05:53.512 Test: pci_hook ...[2024-11-04 16:00:12.120692] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57656 has claimed it 00:05:53.512 passed 00:05:53.512 00:05:53.512 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.512 suites 1 1 n/a 0 0 00:05:53.512 tests 1 1 1 0 0 00:05:53.512 asserts 25 25 25 0 n/a 00:05:53.512 00:05:53.512 Elapsed time = 0.007 seconds 00:05:53.512 EAL: Cannot find device (10000:00:01.0) 00:05:53.512 EAL: Failed to attach device on primary process 00:05:53.512 00:05:53.512 real 0m0.110s 00:05:53.512 user 0m0.040s 00:05:53.512 sys 0m0.069s 00:05:53.512 16:00:12 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.512 ************************************ 00:05:53.512 END TEST env_pci 00:05:53.512 ************************************ 00:05:53.512 16:00:12 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:53.771 16:00:12 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:53.771 16:00:12 env -- env/env.sh@15 -- # uname 00:05:53.771 16:00:12 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:53.771 16:00:12 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:53.771 16:00:12 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:53.771 16:00:12 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:53.771 16:00:12 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:53.771 16:00:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.771 ************************************ 00:05:53.771 START TEST env_dpdk_post_init 00:05:53.771 ************************************ 00:05:53.771 16:00:12 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:53.771 EAL: Detected CPU lcores: 10 00:05:53.771 EAL: Detected NUMA nodes: 1 00:05:53.771 EAL: Detected shared linkage of DPDK 00:05:53.771 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:53.771 EAL: Selected IOVA mode 'PA' 00:05:53.771 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:54.030 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:54.030 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:54.030 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:54.030 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:54.030 Starting DPDK initialization... 00:05:54.030 Starting SPDK post initialization... 00:05:54.030 SPDK NVMe probe 00:05:54.030 Attaching to 0000:00:10.0 00:05:54.030 Attaching to 0000:00:11.0 00:05:54.030 Attaching to 0000:00:12.0 00:05:54.030 Attaching to 0000:00:13.0 00:05:54.030 Attached to 0000:00:10.0 00:05:54.030 Attached to 0000:00:11.0 00:05:54.030 Attached to 0000:00:13.0 00:05:54.030 Attached to 0000:00:12.0 00:05:54.030 Cleaning up... 
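The "Starting DPDK initialization... / Starting SPDK post initialization..." pair shows what this test exercises: DPDK is brought up first and SPDK's env layer is attached to it afterwards, after which the spdk_nvme driver probes and attaches the four emulated 1b36:0010 NVMe controllers at 00:10.0-00:13.0. The '-c 0x1' core mask and '--base-virtaddr=0x200000000000' arguments are the ones env.sh assembled just above; run by hand it is the same invocation (a sketch; the devices must already be bound to a userspace driver, e.g. via scripts/setup.sh):

  sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000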
00:05:54.030 00:05:54.030 real 0m0.318s 00:05:54.030 user 0m0.101s 00:05:54.030 sys 0m0.119s 00:05:54.030 ************************************ 00:05:54.030 END TEST env_dpdk_post_init 00:05:54.030 ************************************ 00:05:54.030 16:00:12 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.030 16:00:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:54.030 16:00:12 env -- env/env.sh@26 -- # uname 00:05:54.030 16:00:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:54.030 16:00:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:54.030 16:00:12 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:54.030 16:00:12 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.030 16:00:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:54.030 ************************************ 00:05:54.030 START TEST env_mem_callbacks 00:05:54.030 ************************************ 00:05:54.030 16:00:12 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:54.030 EAL: Detected CPU lcores: 10 00:05:54.030 EAL: Detected NUMA nodes: 1 00:05:54.030 EAL: Detected shared linkage of DPDK 00:05:54.030 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:54.030 EAL: Selected IOVA mode 'PA' 00:05:54.290 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:54.290 00:05:54.290 00:05:54.290 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.290 http://cunit.sourceforge.net/ 00:05:54.290 00:05:54.290 00:05:54.290 Suite: memory 00:05:54.290 Test: test ... 00:05:54.290 register 0x200000200000 2097152 00:05:54.290 malloc 3145728 00:05:54.290 register 0x200000400000 4194304 00:05:54.290 buf 0x2000004fffc0 len 3145728 PASSED 00:05:54.290 malloc 64 00:05:54.290 buf 0x2000004ffec0 len 64 PASSED 00:05:54.290 malloc 4194304 00:05:54.290 register 0x200000800000 6291456 00:05:54.290 buf 0x2000009fffc0 len 4194304 PASSED 00:05:54.290 free 0x2000004fffc0 3145728 00:05:54.290 free 0x2000004ffec0 64 00:05:54.290 unregister 0x200000400000 4194304 PASSED 00:05:54.290 free 0x2000009fffc0 4194304 00:05:54.290 unregister 0x200000800000 6291456 PASSED 00:05:54.290 malloc 8388608 00:05:54.290 register 0x200000400000 10485760 00:05:54.290 buf 0x2000005fffc0 len 8388608 PASSED 00:05:54.290 free 0x2000005fffc0 8388608 00:05:54.290 unregister 0x200000400000 10485760 PASSED 00:05:54.290 passed 00:05:54.290 00:05:54.290 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.290 suites 1 1 n/a 0 0 00:05:54.290 tests 1 1 1 0 0 00:05:54.290 asserts 15 15 15 0 n/a 00:05:54.290 00:05:54.290 Elapsed time = 0.078 seconds 00:05:54.290 00:05:54.290 real 0m0.285s 00:05:54.290 user 0m0.110s 00:05:54.290 sys 0m0.071s 00:05:54.290 16:00:12 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.290 16:00:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:54.290 ************************************ 00:05:54.290 END TEST env_mem_callbacks 00:05:54.290 ************************************ 00:05:54.549 00:05:54.549 real 0m10.090s 00:05:54.549 user 0m8.129s 00:05:54.549 sys 0m1.584s 00:05:54.549 16:00:13 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.549 16:00:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:54.549 ************************************ 00:05:54.549 END TEST env 00:05:54.549 
************************************ 00:05:54.549 16:00:13 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:54.549 16:00:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:54.549 16:00:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.549 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.549 ************************************ 00:05:54.549 START TEST rpc 00:05:54.549 ************************************ 00:05:54.549 16:00:13 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:54.549 * Looking for test storage... 00:05:54.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:54.549 16:00:13 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:54.549 16:00:13 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:54.549 16:00:13 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:54.808 16:00:13 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:54.808 16:00:13 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.808 16:00:13 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.808 16:00:13 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.808 16:00:13 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.808 16:00:13 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.808 16:00:13 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.808 16:00:13 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.808 16:00:13 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.808 16:00:13 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.808 16:00:13 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.808 16:00:13 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.808 16:00:13 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:54.808 16:00:13 rpc -- scripts/common.sh@345 -- # : 1 00:05:54.808 16:00:13 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.808 16:00:13 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.808 16:00:13 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:54.808 16:00:13 rpc -- scripts/common.sh@353 -- # local d=1 00:05:54.808 16:00:13 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.808 16:00:13 rpc -- scripts/common.sh@355 -- # echo 1 00:05:54.808 16:00:13 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.808 16:00:13 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:54.808 16:00:13 rpc -- scripts/common.sh@353 -- # local d=2 00:05:54.808 16:00:13 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.808 16:00:13 rpc -- scripts/common.sh@355 -- # echo 2 00:05:54.808 16:00:13 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.808 16:00:13 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.808 16:00:13 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.808 16:00:13 rpc -- scripts/common.sh@368 -- # return 0 00:05:54.808 16:00:13 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.808 16:00:13 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:54.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.808 --rc genhtml_branch_coverage=1 00:05:54.808 --rc genhtml_function_coverage=1 00:05:54.808 --rc genhtml_legend=1 00:05:54.808 --rc geninfo_all_blocks=1 00:05:54.808 --rc geninfo_unexecuted_blocks=1 00:05:54.808 00:05:54.808 ' 00:05:54.808 16:00:13 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:54.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.808 --rc genhtml_branch_coverage=1 00:05:54.808 --rc genhtml_function_coverage=1 00:05:54.808 --rc genhtml_legend=1 00:05:54.808 --rc geninfo_all_blocks=1 00:05:54.808 --rc geninfo_unexecuted_blocks=1 00:05:54.808 00:05:54.808 ' 00:05:54.808 16:00:13 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:54.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.808 --rc genhtml_branch_coverage=1 00:05:54.808 --rc genhtml_function_coverage=1 00:05:54.808 --rc genhtml_legend=1 00:05:54.808 --rc geninfo_all_blocks=1 00:05:54.808 --rc geninfo_unexecuted_blocks=1 00:05:54.808 00:05:54.808 ' 00:05:54.808 16:00:13 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:54.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.808 --rc genhtml_branch_coverage=1 00:05:54.808 --rc genhtml_function_coverage=1 00:05:54.808 --rc genhtml_legend=1 00:05:54.808 --rc geninfo_all_blocks=1 00:05:54.808 --rc geninfo_unexecuted_blocks=1 00:05:54.808 00:05:54.808 ' 00:05:54.808 16:00:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57783 00:05:54.808 16:00:13 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:54.808 16:00:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.808 16:00:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57783 00:05:54.808 16:00:13 rpc -- common/autotest_common.sh@833 -- # '[' -z 57783 ']' 00:05:54.808 16:00:13 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.808 16:00:13 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:54.808 16:00:13 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
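Once spdk_tgt (started by rpc.sh@64 with '-e bdev', pid 57783) is listening on /var/tmp/spdk.sock, the rpc_integrity and rpc_daemon_integrity suites below drive it purely through rpc_cmd, which forwards to scripts/rpc.py. Stripped of the harness, the same exchange looks roughly like this (a sketch; it assumes the repo root as the working directory and the default RPC socket):

  ./build/bin/spdk_tgt -e bdev &                      # same flags as rpc.sh@64
  # ...wait for /var/tmp/spdk.sock to accept connections...
  ./scripts/rpc.py bdev_malloc_create 8 512           # 8 MB malloc bdev, 512 B blocks -> prints its name, e.g. Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length         # 2: the malloc bdev plus the passthru stacked on it
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length         # back to 0
  ./scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask   # 0xffffffffffffffff, the value rpc_trace_cmd_test checks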
00:05:54.808 16:00:13 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:54.808 16:00:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.808 [2024-11-04 16:00:13.443855] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:05:54.808 [2024-11-04 16:00:13.444105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57783 ] 00:05:55.066 [2024-11-04 16:00:13.617466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.066 [2024-11-04 16:00:13.730270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:55.066 [2024-11-04 16:00:13.730325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57783' to capture a snapshot of events at runtime. 00:05:55.066 [2024-11-04 16:00:13.730338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:55.066 [2024-11-04 16:00:13.730352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:55.066 [2024-11-04 16:00:13.730362] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57783 for offline analysis/debug. 00:05:55.066 [2024-11-04 16:00:13.731672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.000 16:00:14 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:56.000 16:00:14 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:56.000 16:00:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:56.000 16:00:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:56.000 16:00:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:56.000 16:00:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:56.000 16:00:14 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.000 16:00:14 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.000 16:00:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.000 ************************************ 00:05:56.000 START TEST rpc_integrity 00:05:56.000 ************************************ 00:05:56.000 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:56.000 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:56.000 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.000 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.000 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.000 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:56.000 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:56.000 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:56.000 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:56.000 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.000 16:00:14 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.000 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.000 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:56.000 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:56.000 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.000 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.260 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:56.260 { 00:05:56.260 "name": "Malloc0", 00:05:56.260 "aliases": [ 00:05:56.260 "5f5268d5-927b-4a67-9652-3feb697ec316" 00:05:56.260 ], 00:05:56.260 "product_name": "Malloc disk", 00:05:56.260 "block_size": 512, 00:05:56.260 "num_blocks": 16384, 00:05:56.260 "uuid": "5f5268d5-927b-4a67-9652-3feb697ec316", 00:05:56.260 "assigned_rate_limits": { 00:05:56.260 "rw_ios_per_sec": 0, 00:05:56.260 "rw_mbytes_per_sec": 0, 00:05:56.260 "r_mbytes_per_sec": 0, 00:05:56.260 "w_mbytes_per_sec": 0 00:05:56.260 }, 00:05:56.260 "claimed": false, 00:05:56.260 "zoned": false, 00:05:56.260 "supported_io_types": { 00:05:56.260 "read": true, 00:05:56.260 "write": true, 00:05:56.260 "unmap": true, 00:05:56.260 "flush": true, 00:05:56.260 "reset": true, 00:05:56.260 "nvme_admin": false, 00:05:56.260 "nvme_io": false, 00:05:56.260 "nvme_io_md": false, 00:05:56.260 "write_zeroes": true, 00:05:56.260 "zcopy": true, 00:05:56.260 "get_zone_info": false, 00:05:56.260 "zone_management": false, 00:05:56.260 "zone_append": false, 00:05:56.260 "compare": false, 00:05:56.260 "compare_and_write": false, 00:05:56.260 "abort": true, 00:05:56.260 "seek_hole": false, 00:05:56.260 "seek_data": false, 00:05:56.260 "copy": true, 00:05:56.260 "nvme_iov_md": false 00:05:56.260 }, 00:05:56.260 "memory_domains": [ 00:05:56.260 { 00:05:56.260 "dma_device_id": "system", 00:05:56.260 "dma_device_type": 1 00:05:56.260 }, 00:05:56.260 { 00:05:56.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.260 "dma_device_type": 2 00:05:56.260 } 00:05:56.260 ], 00:05:56.260 "driver_specific": {} 00:05:56.260 } 00:05:56.260 ]' 00:05:56.260 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:56.260 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:56.260 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.260 [2024-11-04 16:00:14.793134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:56.260 [2024-11-04 16:00:14.793308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:56.260 [2024-11-04 16:00:14.793362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:56.260 [2024-11-04 16:00:14.793381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:56.260 [2024-11-04 16:00:14.795974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:56.260 [2024-11-04 16:00:14.796024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:56.260 Passthru0 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.260 
16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.260 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:56.260 { 00:05:56.260 "name": "Malloc0", 00:05:56.260 "aliases": [ 00:05:56.260 "5f5268d5-927b-4a67-9652-3feb697ec316" 00:05:56.260 ], 00:05:56.260 "product_name": "Malloc disk", 00:05:56.260 "block_size": 512, 00:05:56.260 "num_blocks": 16384, 00:05:56.260 "uuid": "5f5268d5-927b-4a67-9652-3feb697ec316", 00:05:56.260 "assigned_rate_limits": { 00:05:56.260 "rw_ios_per_sec": 0, 00:05:56.260 "rw_mbytes_per_sec": 0, 00:05:56.260 "r_mbytes_per_sec": 0, 00:05:56.260 "w_mbytes_per_sec": 0 00:05:56.260 }, 00:05:56.260 "claimed": true, 00:05:56.260 "claim_type": "exclusive_write", 00:05:56.260 "zoned": false, 00:05:56.260 "supported_io_types": { 00:05:56.260 "read": true, 00:05:56.260 "write": true, 00:05:56.260 "unmap": true, 00:05:56.260 "flush": true, 00:05:56.260 "reset": true, 00:05:56.260 "nvme_admin": false, 00:05:56.260 "nvme_io": false, 00:05:56.260 "nvme_io_md": false, 00:05:56.260 "write_zeroes": true, 00:05:56.260 "zcopy": true, 00:05:56.260 "get_zone_info": false, 00:05:56.260 "zone_management": false, 00:05:56.260 "zone_append": false, 00:05:56.260 "compare": false, 00:05:56.260 "compare_and_write": false, 00:05:56.260 "abort": true, 00:05:56.260 "seek_hole": false, 00:05:56.260 "seek_data": false, 00:05:56.260 "copy": true, 00:05:56.260 "nvme_iov_md": false 00:05:56.260 }, 00:05:56.260 "memory_domains": [ 00:05:56.260 { 00:05:56.260 "dma_device_id": "system", 00:05:56.260 "dma_device_type": 1 00:05:56.260 }, 00:05:56.260 { 00:05:56.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.260 "dma_device_type": 2 00:05:56.260 } 00:05:56.260 ], 00:05:56.260 "driver_specific": {} 00:05:56.260 }, 00:05:56.260 { 00:05:56.260 "name": "Passthru0", 00:05:56.260 "aliases": [ 00:05:56.260 "265d9632-f1c8-5732-98bb-3df644a4056e" 00:05:56.260 ], 00:05:56.260 "product_name": "passthru", 00:05:56.260 "block_size": 512, 00:05:56.260 "num_blocks": 16384, 00:05:56.260 "uuid": "265d9632-f1c8-5732-98bb-3df644a4056e", 00:05:56.260 "assigned_rate_limits": { 00:05:56.260 "rw_ios_per_sec": 0, 00:05:56.260 "rw_mbytes_per_sec": 0, 00:05:56.260 "r_mbytes_per_sec": 0, 00:05:56.260 "w_mbytes_per_sec": 0 00:05:56.260 }, 00:05:56.260 "claimed": false, 00:05:56.260 "zoned": false, 00:05:56.260 "supported_io_types": { 00:05:56.260 "read": true, 00:05:56.260 "write": true, 00:05:56.260 "unmap": true, 00:05:56.260 "flush": true, 00:05:56.260 "reset": true, 00:05:56.260 "nvme_admin": false, 00:05:56.260 "nvme_io": false, 00:05:56.260 "nvme_io_md": false, 00:05:56.260 "write_zeroes": true, 00:05:56.260 "zcopy": true, 00:05:56.260 "get_zone_info": false, 00:05:56.260 "zone_management": false, 00:05:56.260 "zone_append": false, 00:05:56.260 "compare": false, 00:05:56.260 "compare_and_write": false, 00:05:56.260 "abort": true, 00:05:56.260 "seek_hole": false, 00:05:56.260 "seek_data": false, 00:05:56.260 "copy": true, 00:05:56.260 "nvme_iov_md": false 00:05:56.260 }, 00:05:56.260 "memory_domains": [ 00:05:56.260 { 00:05:56.260 "dma_device_id": "system", 00:05:56.260 "dma_device_type": 1 00:05:56.260 }, 00:05:56.260 { 00:05:56.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.260 "dma_device_type": 2 
00:05:56.260 } 00:05:56.260 ], 00:05:56.260 "driver_specific": { 00:05:56.260 "passthru": { 00:05:56.260 "name": "Passthru0", 00:05:56.260 "base_bdev_name": "Malloc0" 00:05:56.260 } 00:05:56.260 } 00:05:56.260 } 00:05:56.260 ]' 00:05:56.260 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:56.260 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:56.260 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.260 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.260 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.260 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:56.260 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:56.260 ************************************ 00:05:56.260 END TEST rpc_integrity 00:05:56.260 ************************************ 00:05:56.260 16:00:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:56.260 00:05:56.260 real 0m0.330s 00:05:56.260 user 0m0.180s 00:05:56.260 sys 0m0.051s 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:56.260 16:00:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.521 16:00:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:56.521 16:00:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.521 16:00:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.521 16:00:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.521 ************************************ 00:05:56.521 START TEST rpc_plugins 00:05:56.521 ************************************ 00:05:56.521 16:00:15 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:56.521 16:00:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:56.521 16:00:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.521 16:00:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.521 16:00:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.521 16:00:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:56.521 16:00:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:56.521 16:00:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.521 16:00:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.521 16:00:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.521 16:00:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:56.521 { 00:05:56.521 "name": "Malloc1", 00:05:56.521 "aliases": 
[ 00:05:56.521 "6114777f-bc1e-4ac2-ab86-24399d669f79" 00:05:56.521 ], 00:05:56.521 "product_name": "Malloc disk", 00:05:56.521 "block_size": 4096, 00:05:56.521 "num_blocks": 256, 00:05:56.521 "uuid": "6114777f-bc1e-4ac2-ab86-24399d669f79", 00:05:56.521 "assigned_rate_limits": { 00:05:56.521 "rw_ios_per_sec": 0, 00:05:56.521 "rw_mbytes_per_sec": 0, 00:05:56.521 "r_mbytes_per_sec": 0, 00:05:56.521 "w_mbytes_per_sec": 0 00:05:56.521 }, 00:05:56.521 "claimed": false, 00:05:56.521 "zoned": false, 00:05:56.521 "supported_io_types": { 00:05:56.521 "read": true, 00:05:56.521 "write": true, 00:05:56.521 "unmap": true, 00:05:56.521 "flush": true, 00:05:56.521 "reset": true, 00:05:56.521 "nvme_admin": false, 00:05:56.521 "nvme_io": false, 00:05:56.521 "nvme_io_md": false, 00:05:56.521 "write_zeroes": true, 00:05:56.521 "zcopy": true, 00:05:56.521 "get_zone_info": false, 00:05:56.521 "zone_management": false, 00:05:56.521 "zone_append": false, 00:05:56.521 "compare": false, 00:05:56.521 "compare_and_write": false, 00:05:56.521 "abort": true, 00:05:56.521 "seek_hole": false, 00:05:56.521 "seek_data": false, 00:05:56.521 "copy": true, 00:05:56.521 "nvme_iov_md": false 00:05:56.521 }, 00:05:56.521 "memory_domains": [ 00:05:56.521 { 00:05:56.521 "dma_device_id": "system", 00:05:56.521 "dma_device_type": 1 00:05:56.521 }, 00:05:56.521 { 00:05:56.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.521 "dma_device_type": 2 00:05:56.521 } 00:05:56.521 ], 00:05:56.521 "driver_specific": {} 00:05:56.521 } 00:05:56.521 ]' 00:05:56.521 16:00:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:56.521 16:00:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:56.521 16:00:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:56.521 16:00:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.521 16:00:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.521 16:00:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.521 16:00:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:56.521 16:00:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.521 16:00:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.521 16:00:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.521 16:00:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:56.521 16:00:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:56.521 ************************************ 00:05:56.521 END TEST rpc_plugins 00:05:56.521 ************************************ 00:05:56.521 16:00:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:56.521 00:05:56.521 real 0m0.157s 00:05:56.521 user 0m0.081s 00:05:56.521 sys 0m0.030s 00:05:56.521 16:00:15 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:56.521 16:00:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.783 16:00:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:56.783 16:00:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.783 16:00:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.783 16:00:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.783 ************************************ 00:05:56.783 START TEST rpc_trace_cmd_test 00:05:56.783 ************************************ 00:05:56.783 16:00:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 
-- # rpc_trace_cmd_test 00:05:56.783 16:00:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:56.783 16:00:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:56.783 16:00:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.783 16:00:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:56.783 16:00:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.783 16:00:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:56.783 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57783", 00:05:56.783 "tpoint_group_mask": "0x8", 00:05:56.783 "iscsi_conn": { 00:05:56.783 "mask": "0x2", 00:05:56.783 "tpoint_mask": "0x0" 00:05:56.783 }, 00:05:56.783 "scsi": { 00:05:56.783 "mask": "0x4", 00:05:56.783 "tpoint_mask": "0x0" 00:05:56.783 }, 00:05:56.783 "bdev": { 00:05:56.783 "mask": "0x8", 00:05:56.783 "tpoint_mask": "0xffffffffffffffff" 00:05:56.783 }, 00:05:56.783 "nvmf_rdma": { 00:05:56.783 "mask": "0x10", 00:05:56.783 "tpoint_mask": "0x0" 00:05:56.783 }, 00:05:56.783 "nvmf_tcp": { 00:05:56.783 "mask": "0x20", 00:05:56.783 "tpoint_mask": "0x0" 00:05:56.783 }, 00:05:56.783 "ftl": { 00:05:56.783 "mask": "0x40", 00:05:56.783 "tpoint_mask": "0x0" 00:05:56.783 }, 00:05:56.783 "blobfs": { 00:05:56.783 "mask": "0x80", 00:05:56.783 "tpoint_mask": "0x0" 00:05:56.783 }, 00:05:56.783 "dsa": { 00:05:56.783 "mask": "0x200", 00:05:56.783 "tpoint_mask": "0x0" 00:05:56.783 }, 00:05:56.783 "thread": { 00:05:56.783 "mask": "0x400", 00:05:56.783 "tpoint_mask": "0x0" 00:05:56.783 }, 00:05:56.783 "nvme_pcie": { 00:05:56.783 "mask": "0x800", 00:05:56.783 "tpoint_mask": "0x0" 00:05:56.783 }, 00:05:56.783 "iaa": { 00:05:56.783 "mask": "0x1000", 00:05:56.783 "tpoint_mask": "0x0" 00:05:56.783 }, 00:05:56.783 "nvme_tcp": { 00:05:56.783 "mask": "0x2000", 00:05:56.783 "tpoint_mask": "0x0" 00:05:56.783 }, 00:05:56.783 "bdev_nvme": { 00:05:56.783 "mask": "0x4000", 00:05:56.783 "tpoint_mask": "0x0" 00:05:56.783 }, 00:05:56.783 "sock": { 00:05:56.783 "mask": "0x8000", 00:05:56.783 "tpoint_mask": "0x0" 00:05:56.783 }, 00:05:56.783 "blob": { 00:05:56.783 "mask": "0x10000", 00:05:56.783 "tpoint_mask": "0x0" 00:05:56.783 }, 00:05:56.783 "bdev_raid": { 00:05:56.783 "mask": "0x20000", 00:05:56.783 "tpoint_mask": "0x0" 00:05:56.783 }, 00:05:56.783 "scheduler": { 00:05:56.783 "mask": "0x40000", 00:05:56.783 "tpoint_mask": "0x0" 00:05:56.783 } 00:05:56.783 }' 00:05:56.783 16:00:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:56.783 16:00:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:56.783 16:00:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:56.783 16:00:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:56.783 16:00:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:56.783 16:00:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:56.783 16:00:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:56.783 16:00:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:56.783 16:00:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:57.042 ************************************ 00:05:57.042 END TEST rpc_trace_cmd_test 00:05:57.042 ************************************ 00:05:57.042 16:00:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:57.042 00:05:57.042 real 0m0.264s 
00:05:57.042 user 0m0.214s 00:05:57.042 sys 0m0.041s 00:05:57.042 16:00:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.042 16:00:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:57.042 16:00:15 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:57.042 16:00:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:57.042 16:00:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:57.042 16:00:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:57.042 16:00:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.042 16:00:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.042 ************************************ 00:05:57.042 START TEST rpc_daemon_integrity 00:05:57.042 ************************************ 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.042 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:57.042 { 00:05:57.042 "name": "Malloc2", 00:05:57.042 "aliases": [ 00:05:57.042 "2bad6457-6901-4821-ae15-fb0a3664dc44" 00:05:57.042 ], 00:05:57.042 "product_name": "Malloc disk", 00:05:57.042 "block_size": 512, 00:05:57.042 "num_blocks": 16384, 00:05:57.042 "uuid": "2bad6457-6901-4821-ae15-fb0a3664dc44", 00:05:57.042 "assigned_rate_limits": { 00:05:57.042 "rw_ios_per_sec": 0, 00:05:57.042 "rw_mbytes_per_sec": 0, 00:05:57.042 "r_mbytes_per_sec": 0, 00:05:57.042 "w_mbytes_per_sec": 0 00:05:57.042 }, 00:05:57.042 "claimed": false, 00:05:57.042 "zoned": false, 00:05:57.042 "supported_io_types": { 00:05:57.042 "read": true, 00:05:57.042 "write": true, 00:05:57.042 "unmap": true, 00:05:57.042 "flush": true, 00:05:57.042 "reset": true, 00:05:57.042 "nvme_admin": false, 00:05:57.042 "nvme_io": false, 00:05:57.042 "nvme_io_md": false, 00:05:57.042 "write_zeroes": true, 00:05:57.042 "zcopy": true, 00:05:57.042 "get_zone_info": false, 00:05:57.042 "zone_management": false, 00:05:57.042 "zone_append": false, 00:05:57.042 "compare": false, 00:05:57.042 
"compare_and_write": false, 00:05:57.042 "abort": true, 00:05:57.042 "seek_hole": false, 00:05:57.042 "seek_data": false, 00:05:57.042 "copy": true, 00:05:57.042 "nvme_iov_md": false 00:05:57.042 }, 00:05:57.042 "memory_domains": [ 00:05:57.042 { 00:05:57.042 "dma_device_id": "system", 00:05:57.042 "dma_device_type": 1 00:05:57.042 }, 00:05:57.042 { 00:05:57.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.043 "dma_device_type": 2 00:05:57.043 } 00:05:57.043 ], 00:05:57.043 "driver_specific": {} 00:05:57.043 } 00:05:57.043 ]' 00:05:57.043 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:57.043 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:57.043 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:57.043 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.043 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.043 [2024-11-04 16:00:15.761577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:57.043 [2024-11-04 16:00:15.761655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:57.043 [2024-11-04 16:00:15.761682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:57.043 [2024-11-04 16:00:15.761698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:57.301 [2024-11-04 16:00:15.764482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:57.301 [2024-11-04 16:00:15.764681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:57.301 Passthru0 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:57.302 { 00:05:57.302 "name": "Malloc2", 00:05:57.302 "aliases": [ 00:05:57.302 "2bad6457-6901-4821-ae15-fb0a3664dc44" 00:05:57.302 ], 00:05:57.302 "product_name": "Malloc disk", 00:05:57.302 "block_size": 512, 00:05:57.302 "num_blocks": 16384, 00:05:57.302 "uuid": "2bad6457-6901-4821-ae15-fb0a3664dc44", 00:05:57.302 "assigned_rate_limits": { 00:05:57.302 "rw_ios_per_sec": 0, 00:05:57.302 "rw_mbytes_per_sec": 0, 00:05:57.302 "r_mbytes_per_sec": 0, 00:05:57.302 "w_mbytes_per_sec": 0 00:05:57.302 }, 00:05:57.302 "claimed": true, 00:05:57.302 "claim_type": "exclusive_write", 00:05:57.302 "zoned": false, 00:05:57.302 "supported_io_types": { 00:05:57.302 "read": true, 00:05:57.302 "write": true, 00:05:57.302 "unmap": true, 00:05:57.302 "flush": true, 00:05:57.302 "reset": true, 00:05:57.302 "nvme_admin": false, 00:05:57.302 "nvme_io": false, 00:05:57.302 "nvme_io_md": false, 00:05:57.302 "write_zeroes": true, 00:05:57.302 "zcopy": true, 00:05:57.302 "get_zone_info": false, 00:05:57.302 "zone_management": false, 00:05:57.302 "zone_append": false, 00:05:57.302 "compare": false, 00:05:57.302 "compare_and_write": false, 00:05:57.302 "abort": true, 00:05:57.302 "seek_hole": false, 00:05:57.302 "seek_data": false, 
00:05:57.302 "copy": true, 00:05:57.302 "nvme_iov_md": false 00:05:57.302 }, 00:05:57.302 "memory_domains": [ 00:05:57.302 { 00:05:57.302 "dma_device_id": "system", 00:05:57.302 "dma_device_type": 1 00:05:57.302 }, 00:05:57.302 { 00:05:57.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.302 "dma_device_type": 2 00:05:57.302 } 00:05:57.302 ], 00:05:57.302 "driver_specific": {} 00:05:57.302 }, 00:05:57.302 { 00:05:57.302 "name": "Passthru0", 00:05:57.302 "aliases": [ 00:05:57.302 "81c1798f-112c-575f-a5c0-4344385f0841" 00:05:57.302 ], 00:05:57.302 "product_name": "passthru", 00:05:57.302 "block_size": 512, 00:05:57.302 "num_blocks": 16384, 00:05:57.302 "uuid": "81c1798f-112c-575f-a5c0-4344385f0841", 00:05:57.302 "assigned_rate_limits": { 00:05:57.302 "rw_ios_per_sec": 0, 00:05:57.302 "rw_mbytes_per_sec": 0, 00:05:57.302 "r_mbytes_per_sec": 0, 00:05:57.302 "w_mbytes_per_sec": 0 00:05:57.302 }, 00:05:57.302 "claimed": false, 00:05:57.302 "zoned": false, 00:05:57.302 "supported_io_types": { 00:05:57.302 "read": true, 00:05:57.302 "write": true, 00:05:57.302 "unmap": true, 00:05:57.302 "flush": true, 00:05:57.302 "reset": true, 00:05:57.302 "nvme_admin": false, 00:05:57.302 "nvme_io": false, 00:05:57.302 "nvme_io_md": false, 00:05:57.302 "write_zeroes": true, 00:05:57.302 "zcopy": true, 00:05:57.302 "get_zone_info": false, 00:05:57.302 "zone_management": false, 00:05:57.302 "zone_append": false, 00:05:57.302 "compare": false, 00:05:57.302 "compare_and_write": false, 00:05:57.302 "abort": true, 00:05:57.302 "seek_hole": false, 00:05:57.302 "seek_data": false, 00:05:57.302 "copy": true, 00:05:57.302 "nvme_iov_md": false 00:05:57.302 }, 00:05:57.302 "memory_domains": [ 00:05:57.302 { 00:05:57.302 "dma_device_id": "system", 00:05:57.302 "dma_device_type": 1 00:05:57.302 }, 00:05:57.302 { 00:05:57.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.302 "dma_device_type": 2 00:05:57.302 } 00:05:57.302 ], 00:05:57.302 "driver_specific": { 00:05:57.302 "passthru": { 00:05:57.302 "name": "Passthru0", 00:05:57.302 "base_bdev_name": "Malloc2" 00:05:57.302 } 00:05:57.302 } 00:05:57.302 } 00:05:57.302 ]' 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:57.302 ************************************ 00:05:57.302 END TEST rpc_daemon_integrity 00:05:57.302 ************************************ 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:57.302 00:05:57.302 real 0m0.355s 00:05:57.302 user 0m0.188s 00:05:57.302 sys 0m0.060s 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.302 16:00:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.302 16:00:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:57.302 16:00:16 rpc -- rpc/rpc.sh@84 -- # killprocess 57783 00:05:57.302 16:00:16 rpc -- common/autotest_common.sh@952 -- # '[' -z 57783 ']' 00:05:57.302 16:00:16 rpc -- common/autotest_common.sh@956 -- # kill -0 57783 00:05:57.302 16:00:16 rpc -- common/autotest_common.sh@957 -- # uname 00:05:57.561 16:00:16 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:57.561 16:00:16 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57783 00:05:57.561 killing process with pid 57783 00:05:57.561 16:00:16 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:57.561 16:00:16 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:57.561 16:00:16 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57783' 00:05:57.561 16:00:16 rpc -- common/autotest_common.sh@971 -- # kill 57783 00:05:57.561 16:00:16 rpc -- common/autotest_common.sh@976 -- # wait 57783 00:06:00.101 00:06:00.101 real 0m5.370s 00:06:00.101 user 0m5.875s 00:06:00.101 sys 0m0.981s 00:06:00.102 16:00:18 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.102 16:00:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.102 ************************************ 00:06:00.102 END TEST rpc 00:06:00.102 ************************************ 00:06:00.102 16:00:18 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:00.102 16:00:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.102 16:00:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.102 16:00:18 -- common/autotest_common.sh@10 -- # set +x 00:06:00.102 ************************************ 00:06:00.102 START TEST skip_rpc 00:06:00.102 ************************************ 00:06:00.102 16:00:18 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:00.102 * Looking for test storage... 
00:06:00.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:00.102 16:00:18 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:00.102 16:00:18 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:00.102 16:00:18 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:00.102 16:00:18 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.102 16:00:18 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:00.102 16:00:18 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.102 16:00:18 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.102 --rc genhtml_branch_coverage=1 00:06:00.102 --rc genhtml_function_coverage=1 00:06:00.102 --rc genhtml_legend=1 00:06:00.102 --rc geninfo_all_blocks=1 00:06:00.102 --rc geninfo_unexecuted_blocks=1 00:06:00.102 00:06:00.102 ' 00:06:00.102 16:00:18 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.102 --rc genhtml_branch_coverage=1 00:06:00.102 --rc genhtml_function_coverage=1 00:06:00.102 --rc genhtml_legend=1 00:06:00.102 --rc geninfo_all_blocks=1 00:06:00.102 --rc geninfo_unexecuted_blocks=1 00:06:00.102 00:06:00.102 ' 00:06:00.102 16:00:18 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:06:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.102 --rc genhtml_branch_coverage=1 00:06:00.102 --rc genhtml_function_coverage=1 00:06:00.102 --rc genhtml_legend=1 00:06:00.102 --rc geninfo_all_blocks=1 00:06:00.102 --rc geninfo_unexecuted_blocks=1 00:06:00.102 00:06:00.102 ' 00:06:00.102 16:00:18 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:00.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.102 --rc genhtml_branch_coverage=1 00:06:00.102 --rc genhtml_function_coverage=1 00:06:00.102 --rc genhtml_legend=1 00:06:00.102 --rc geninfo_all_blocks=1 00:06:00.102 --rc geninfo_unexecuted_blocks=1 00:06:00.102 00:06:00.102 ' 00:06:00.102 16:00:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:00.102 16:00:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:00.102 16:00:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:00.102 16:00:18 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.102 16:00:18 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.102 16:00:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.102 ************************************ 00:06:00.102 START TEST skip_rpc 00:06:00.102 ************************************ 00:06:00.102 16:00:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:06:00.102 16:00:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58018 00:06:00.102 16:00:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:00.102 16:00:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.102 16:00:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:00.361 [2024-11-04 16:00:18.875954] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
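(Aside: the target above is launched with --no-rpc-server, so the rpc_cmd spdk_get_version attempt that follows is expected to fail; a minimal reproduction with rpc.py, paths assumed relative to the spdk repo root:)
    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5                              # mirror the test's delay while the target comes up
    scripts/rpc.py spdk_get_version      # expected to fail: no RPC server is listening
    echo $?                              # a non-zero status is the pass condition of skip_rpc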
00:06:00.361 [2024-11-04 16:00:18.876408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58018 ] 00:06:00.361 [2024-11-04 16:00:19.057785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.619 [2024-11-04 16:00:19.171140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58018 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 58018 ']' 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 58018 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58018 00:06:05.887 killing process with pid 58018 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58018' 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 58018 00:06:05.887 16:00:23 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 58018 00:06:07.787 00:06:07.787 real 0m7.464s 00:06:07.787 user 0m6.992s 00:06:07.787 sys 0m0.392s 00:06:07.787 16:00:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:07.787 16:00:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.787 ************************************ 00:06:07.787 END TEST skip_rpc 00:06:07.787 
************************************ 00:06:07.787 16:00:26 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:07.787 16:00:26 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:07.787 16:00:26 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:07.787 16:00:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.787 ************************************ 00:06:07.787 START TEST skip_rpc_with_json 00:06:07.787 ************************************ 00:06:07.787 16:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:06:07.787 16:00:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:07.787 16:00:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58128 00:06:07.787 16:00:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.787 16:00:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58128 00:06:07.787 16:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 58128 ']' 00:06:07.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.787 16:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.787 16:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:07.787 16:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.787 16:00:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.787 16:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:07.787 16:00:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:07.787 [2024-11-04 16:00:26.400996] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
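(Aside: the skip_rpc_with_json run below creates a TCP transport over RPC, saves the live configuration with save_config, then restarts the target from that file and greps its log for the transport-init notice; a condensed sketch of that round trip, using the config and log paths from the test and rpc.py in place of the harness's rpc_cmd wrapper:)
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > test/rpc/config.json
    # restart without the RPC server, replaying the saved configuration
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json 2>&1 | tee test/rpc/log.txt
    grep -q 'TCP Transport Init' test/rpc/log.txt   # proves the transport was recreated from JSON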
00:06:07.787 [2024-11-04 16:00:26.401315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58128 ] 00:06:08.045 [2024-11-04 16:00:26.580583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.045 [2024-11-04 16:00:26.692346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.981 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:08.981 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:06:08.981 16:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:08.981 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.981 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.981 [2024-11-04 16:00:27.557147] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:08.981 request: 00:06:08.981 { 00:06:08.981 "trtype": "tcp", 00:06:08.982 "method": "nvmf_get_transports", 00:06:08.982 "req_id": 1 00:06:08.982 } 00:06:08.982 Got JSON-RPC error response 00:06:08.982 response: 00:06:08.982 { 00:06:08.982 "code": -19, 00:06:08.982 "message": "No such device" 00:06:08.982 } 00:06:08.982 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:08.982 16:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:08.982 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.982 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.982 [2024-11-04 16:00:27.569275] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.982 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.982 16:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:08.982 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.982 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:09.240 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.240 16:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:09.240 { 00:06:09.240 "subsystems": [ 00:06:09.240 { 00:06:09.240 "subsystem": "fsdev", 00:06:09.240 "config": [ 00:06:09.240 { 00:06:09.240 "method": "fsdev_set_opts", 00:06:09.240 "params": { 00:06:09.240 "fsdev_io_pool_size": 65535, 00:06:09.240 "fsdev_io_cache_size": 256 00:06:09.240 } 00:06:09.240 } 00:06:09.240 ] 00:06:09.240 }, 00:06:09.240 { 00:06:09.240 "subsystem": "keyring", 00:06:09.240 "config": [] 00:06:09.240 }, 00:06:09.240 { 00:06:09.241 "subsystem": "iobuf", 00:06:09.241 "config": [ 00:06:09.241 { 00:06:09.241 "method": "iobuf_set_options", 00:06:09.241 "params": { 00:06:09.241 "small_pool_count": 8192, 00:06:09.241 "large_pool_count": 1024, 00:06:09.241 "small_bufsize": 8192, 00:06:09.241 "large_bufsize": 135168, 00:06:09.241 "enable_numa": false 00:06:09.241 } 00:06:09.241 } 00:06:09.241 ] 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "subsystem": "sock", 00:06:09.241 "config": [ 00:06:09.241 { 
00:06:09.241 "method": "sock_set_default_impl", 00:06:09.241 "params": { 00:06:09.241 "impl_name": "posix" 00:06:09.241 } 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "method": "sock_impl_set_options", 00:06:09.241 "params": { 00:06:09.241 "impl_name": "ssl", 00:06:09.241 "recv_buf_size": 4096, 00:06:09.241 "send_buf_size": 4096, 00:06:09.241 "enable_recv_pipe": true, 00:06:09.241 "enable_quickack": false, 00:06:09.241 "enable_placement_id": 0, 00:06:09.241 "enable_zerocopy_send_server": true, 00:06:09.241 "enable_zerocopy_send_client": false, 00:06:09.241 "zerocopy_threshold": 0, 00:06:09.241 "tls_version": 0, 00:06:09.241 "enable_ktls": false 00:06:09.241 } 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "method": "sock_impl_set_options", 00:06:09.241 "params": { 00:06:09.241 "impl_name": "posix", 00:06:09.241 "recv_buf_size": 2097152, 00:06:09.241 "send_buf_size": 2097152, 00:06:09.241 "enable_recv_pipe": true, 00:06:09.241 "enable_quickack": false, 00:06:09.241 "enable_placement_id": 0, 00:06:09.241 "enable_zerocopy_send_server": true, 00:06:09.241 "enable_zerocopy_send_client": false, 00:06:09.241 "zerocopy_threshold": 0, 00:06:09.241 "tls_version": 0, 00:06:09.241 "enable_ktls": false 00:06:09.241 } 00:06:09.241 } 00:06:09.241 ] 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "subsystem": "vmd", 00:06:09.241 "config": [] 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "subsystem": "accel", 00:06:09.241 "config": [ 00:06:09.241 { 00:06:09.241 "method": "accel_set_options", 00:06:09.241 "params": { 00:06:09.241 "small_cache_size": 128, 00:06:09.241 "large_cache_size": 16, 00:06:09.241 "task_count": 2048, 00:06:09.241 "sequence_count": 2048, 00:06:09.241 "buf_count": 2048 00:06:09.241 } 00:06:09.241 } 00:06:09.241 ] 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "subsystem": "bdev", 00:06:09.241 "config": [ 00:06:09.241 { 00:06:09.241 "method": "bdev_set_options", 00:06:09.241 "params": { 00:06:09.241 "bdev_io_pool_size": 65535, 00:06:09.241 "bdev_io_cache_size": 256, 00:06:09.241 "bdev_auto_examine": true, 00:06:09.241 "iobuf_small_cache_size": 128, 00:06:09.241 "iobuf_large_cache_size": 16 00:06:09.241 } 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "method": "bdev_raid_set_options", 00:06:09.241 "params": { 00:06:09.241 "process_window_size_kb": 1024, 00:06:09.241 "process_max_bandwidth_mb_sec": 0 00:06:09.241 } 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "method": "bdev_iscsi_set_options", 00:06:09.241 "params": { 00:06:09.241 "timeout_sec": 30 00:06:09.241 } 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "method": "bdev_nvme_set_options", 00:06:09.241 "params": { 00:06:09.241 "action_on_timeout": "none", 00:06:09.241 "timeout_us": 0, 00:06:09.241 "timeout_admin_us": 0, 00:06:09.241 "keep_alive_timeout_ms": 10000, 00:06:09.241 "arbitration_burst": 0, 00:06:09.241 "low_priority_weight": 0, 00:06:09.241 "medium_priority_weight": 0, 00:06:09.241 "high_priority_weight": 0, 00:06:09.241 "nvme_adminq_poll_period_us": 10000, 00:06:09.241 "nvme_ioq_poll_period_us": 0, 00:06:09.241 "io_queue_requests": 0, 00:06:09.241 "delay_cmd_submit": true, 00:06:09.241 "transport_retry_count": 4, 00:06:09.241 "bdev_retry_count": 3, 00:06:09.241 "transport_ack_timeout": 0, 00:06:09.241 "ctrlr_loss_timeout_sec": 0, 00:06:09.241 "reconnect_delay_sec": 0, 00:06:09.241 "fast_io_fail_timeout_sec": 0, 00:06:09.241 "disable_auto_failback": false, 00:06:09.241 "generate_uuids": false, 00:06:09.241 "transport_tos": 0, 00:06:09.241 "nvme_error_stat": false, 00:06:09.241 "rdma_srq_size": 0, 00:06:09.241 "io_path_stat": false, 
00:06:09.241 "allow_accel_sequence": false, 00:06:09.241 "rdma_max_cq_size": 0, 00:06:09.241 "rdma_cm_event_timeout_ms": 0, 00:06:09.241 "dhchap_digests": [ 00:06:09.241 "sha256", 00:06:09.241 "sha384", 00:06:09.241 "sha512" 00:06:09.241 ], 00:06:09.241 "dhchap_dhgroups": [ 00:06:09.241 "null", 00:06:09.241 "ffdhe2048", 00:06:09.241 "ffdhe3072", 00:06:09.241 "ffdhe4096", 00:06:09.241 "ffdhe6144", 00:06:09.241 "ffdhe8192" 00:06:09.241 ] 00:06:09.241 } 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "method": "bdev_nvme_set_hotplug", 00:06:09.241 "params": { 00:06:09.241 "period_us": 100000, 00:06:09.241 "enable": false 00:06:09.241 } 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "method": "bdev_wait_for_examine" 00:06:09.241 } 00:06:09.241 ] 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "subsystem": "scsi", 00:06:09.241 "config": null 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "subsystem": "scheduler", 00:06:09.241 "config": [ 00:06:09.241 { 00:06:09.241 "method": "framework_set_scheduler", 00:06:09.241 "params": { 00:06:09.241 "name": "static" 00:06:09.241 } 00:06:09.241 } 00:06:09.241 ] 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "subsystem": "vhost_scsi", 00:06:09.241 "config": [] 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "subsystem": "vhost_blk", 00:06:09.241 "config": [] 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "subsystem": "ublk", 00:06:09.241 "config": [] 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "subsystem": "nbd", 00:06:09.241 "config": [] 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "subsystem": "nvmf", 00:06:09.241 "config": [ 00:06:09.241 { 00:06:09.241 "method": "nvmf_set_config", 00:06:09.241 "params": { 00:06:09.241 "discovery_filter": "match_any", 00:06:09.241 "admin_cmd_passthru": { 00:06:09.241 "identify_ctrlr": false 00:06:09.241 }, 00:06:09.241 "dhchap_digests": [ 00:06:09.241 "sha256", 00:06:09.241 "sha384", 00:06:09.241 "sha512" 00:06:09.241 ], 00:06:09.241 "dhchap_dhgroups": [ 00:06:09.241 "null", 00:06:09.241 "ffdhe2048", 00:06:09.241 "ffdhe3072", 00:06:09.241 "ffdhe4096", 00:06:09.241 "ffdhe6144", 00:06:09.241 "ffdhe8192" 00:06:09.241 ] 00:06:09.241 } 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "method": "nvmf_set_max_subsystems", 00:06:09.241 "params": { 00:06:09.241 "max_subsystems": 1024 00:06:09.241 } 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "method": "nvmf_set_crdt", 00:06:09.241 "params": { 00:06:09.241 "crdt1": 0, 00:06:09.241 "crdt2": 0, 00:06:09.241 "crdt3": 0 00:06:09.241 } 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "method": "nvmf_create_transport", 00:06:09.241 "params": { 00:06:09.241 "trtype": "TCP", 00:06:09.241 "max_queue_depth": 128, 00:06:09.241 "max_io_qpairs_per_ctrlr": 127, 00:06:09.241 "in_capsule_data_size": 4096, 00:06:09.241 "max_io_size": 131072, 00:06:09.241 "io_unit_size": 131072, 00:06:09.241 "max_aq_depth": 128, 00:06:09.241 "num_shared_buffers": 511, 00:06:09.241 "buf_cache_size": 4294967295, 00:06:09.241 "dif_insert_or_strip": false, 00:06:09.241 "zcopy": false, 00:06:09.241 "c2h_success": true, 00:06:09.241 "sock_priority": 0, 00:06:09.241 "abort_timeout_sec": 1, 00:06:09.241 "ack_timeout": 0, 00:06:09.241 "data_wr_pool_size": 0 00:06:09.241 } 00:06:09.241 } 00:06:09.241 ] 00:06:09.241 }, 00:06:09.241 { 00:06:09.241 "subsystem": "iscsi", 00:06:09.241 "config": [ 00:06:09.241 { 00:06:09.241 "method": "iscsi_set_options", 00:06:09.241 "params": { 00:06:09.241 "node_base": "iqn.2016-06.io.spdk", 00:06:09.241 "max_sessions": 128, 00:06:09.241 "max_connections_per_session": 2, 00:06:09.241 "max_queue_depth": 64, 00:06:09.241 
"default_time2wait": 2, 00:06:09.241 "default_time2retain": 20, 00:06:09.241 "first_burst_length": 8192, 00:06:09.241 "immediate_data": true, 00:06:09.241 "allow_duplicated_isid": false, 00:06:09.241 "error_recovery_level": 0, 00:06:09.241 "nop_timeout": 60, 00:06:09.241 "nop_in_interval": 30, 00:06:09.241 "disable_chap": false, 00:06:09.241 "require_chap": false, 00:06:09.241 "mutual_chap": false, 00:06:09.241 "chap_group": 0, 00:06:09.241 "max_large_datain_per_connection": 64, 00:06:09.241 "max_r2t_per_connection": 4, 00:06:09.241 "pdu_pool_size": 36864, 00:06:09.241 "immediate_data_pool_size": 16384, 00:06:09.241 "data_out_pool_size": 2048 00:06:09.241 } 00:06:09.241 } 00:06:09.241 ] 00:06:09.241 } 00:06:09.241 ] 00:06:09.241 } 00:06:09.241 16:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:09.241 16:00:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58128 00:06:09.241 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58128 ']' 00:06:09.241 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58128 00:06:09.241 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:09.242 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:09.242 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58128 00:06:09.242 killing process with pid 58128 00:06:09.242 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:09.242 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:09.242 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58128' 00:06:09.242 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 58128 00:06:09.242 16:00:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58128 00:06:11.804 16:00:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58183 00:06:11.804 16:00:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:11.804 16:00:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:17.075 16:00:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58183 00:06:17.075 16:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58183 ']' 00:06:17.075 16:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58183 00:06:17.075 16:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:17.075 16:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:17.075 16:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58183 00:06:17.075 killing process with pid 58183 00:06:17.075 16:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:17.075 16:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:17.075 16:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58183' 00:06:17.075 16:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 58183 00:06:17.075 16:00:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58183 00:06:19.606 16:00:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:19.606 16:00:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:19.606 ************************************ 00:06:19.606 END TEST skip_rpc_with_json 00:06:19.606 ************************************ 00:06:19.606 00:06:19.606 real 0m11.601s 00:06:19.606 user 0m11.036s 00:06:19.606 sys 0m0.907s 00:06:19.606 16:00:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:19.606 16:00:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:19.606 16:00:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:19.606 16:00:37 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:19.606 16:00:37 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:19.606 16:00:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.606 ************************************ 00:06:19.606 START TEST skip_rpc_with_delay 00:06:19.606 ************************************ 00:06:19.606 16:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:06:19.606 16:00:37 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.606 16:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:19.606 16:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.606 16:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.606 16:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.607 16:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.607 16:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.607 16:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.607 16:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.607 16:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.607 16:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:19.607 16:00:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:19.607 [2024-11-04 16:00:38.080139] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
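(Aside: the error above is the point of skip_rpc_with_delay: --wait-for-rpc asks the app to pause until an RPC arrives, which is meaningless when --no-rpc-server disables the RPC server entirely, so startup aborts; a minimal trigger, asserting on the non-zero exit status:)
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    echo $?    # expected to be non-zero; the app exits with the error shown above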
00:06:19.607 16:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:19.607 16:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.607 16:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.607 16:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.607 00:06:19.607 real 0m0.175s 00:06:19.607 user 0m0.086s 00:06:19.607 sys 0m0.088s 00:06:19.607 16:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:19.607 ************************************ 00:06:19.607 END TEST skip_rpc_with_delay 00:06:19.607 ************************************ 00:06:19.607 16:00:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:19.607 16:00:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:19.607 16:00:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:19.607 16:00:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:19.607 16:00:38 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:19.607 16:00:38 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:19.607 16:00:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.607 ************************************ 00:06:19.607 START TEST exit_on_failed_rpc_init 00:06:19.607 ************************************ 00:06:19.607 16:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:06:19.607 16:00:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.607 16:00:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58312 00:06:19.607 16:00:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58312 00:06:19.607 16:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 58312 ']' 00:06:19.607 16:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.607 16:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:19.607 16:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.607 16:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:19.607 16:00:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:19.864 [2024-11-04 16:00:38.332324] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
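(Aside: the NOT/es bookkeeping used throughout these checks simply inverts an expected failure into a pass; a stripped-down equivalent of that helper, omitting the harness's extra handling of signal exit codes:)
    NOT() {
        # run the command, swallow its failure, and succeed only if it failed
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    # usage: the check passes precisely because the RPC cannot be served
    NOT scripts/rpc.py spdk_get_version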
00:06:19.864 [2024-11-04 16:00:38.332448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58312 ] 00:06:19.864 [2024-11-04 16:00:38.514704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.122 [2024-11-04 16:00:38.662400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.073 16:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:21.073 16:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:06:21.073 16:00:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.073 16:00:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:21.073 16:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:21.073 16:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:21.073 16:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.073 16:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.073 16:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.073 16:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.073 16:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.073 16:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.073 16:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.073 16:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:21.073 16:00:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:21.331 [2024-11-04 16:00:39.796717] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:06:21.331 [2024-11-04 16:00:39.796857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58341 ] 00:06:21.331 [2024-11-04 16:00:39.972842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.589 [2024-11-04 16:00:40.148656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.589 [2024-11-04 16:00:40.148807] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
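(Aside: the "socket path in use" error above is what exit_on_failed_rpc_init provokes: a second spdk_tgt is pointed at the same default /var/tmp/spdk.sock while the first instance still holds it, so the second is expected to exit non-zero; a sketch, with -r shown as the usual way to avoid the clash by choosing another socket, the alternate path being hypothetical:)
    build/bin/spdk_tgt -m 0x1 &                        # first instance owns /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x2                          # second instance fails: socket path already in use
    echo $?                                            # non-zero, as the test asserts
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock   # would sidestep the conflict entirely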
00:06:21.589 [2024-11-04 16:00:40.148830] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:21.589 [2024-11-04 16:00:40.148862] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58312 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 58312 ']' 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 58312 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58312 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58312' 00:06:21.848 killing process with pid 58312 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 58312 00:06:21.848 16:00:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 58312 00:06:25.140 ************************************ 00:06:25.140 END TEST exit_on_failed_rpc_init 00:06:25.140 ************************************ 00:06:25.140 00:06:25.140 real 0m4.910s 00:06:25.140 user 0m5.116s 00:06:25.140 sys 0m0.777s 00:06:25.140 16:00:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:25.140 16:00:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:25.141 16:00:43 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:25.141 00:06:25.141 real 0m24.655s 00:06:25.141 user 0m23.421s 00:06:25.141 sys 0m2.469s 00:06:25.141 16:00:43 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:25.141 16:00:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.141 ************************************ 00:06:25.141 END TEST skip_rpc 00:06:25.141 ************************************ 00:06:25.141 16:00:43 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:25.141 16:00:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:25.141 16:00:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:25.141 16:00:43 -- common/autotest_common.sh@10 -- # set +x 00:06:25.141 
************************************ 00:06:25.141 START TEST rpc_client 00:06:25.141 ************************************ 00:06:25.141 16:00:43 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:25.141 * Looking for test storage... 00:06:25.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:25.141 16:00:43 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:25.141 16:00:43 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:06:25.141 16:00:43 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:25.141 16:00:43 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.141 16:00:43 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:25.141 16:00:43 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.141 16:00:43 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:25.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.141 --rc genhtml_branch_coverage=1 00:06:25.141 --rc genhtml_function_coverage=1 00:06:25.141 --rc genhtml_legend=1 00:06:25.141 --rc geninfo_all_blocks=1 00:06:25.141 --rc geninfo_unexecuted_blocks=1 00:06:25.141 00:06:25.141 ' 00:06:25.141 16:00:43 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:25.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.141 --rc genhtml_branch_coverage=1 00:06:25.141 --rc genhtml_function_coverage=1 00:06:25.141 --rc genhtml_legend=1 00:06:25.141 --rc geninfo_all_blocks=1 00:06:25.141 --rc geninfo_unexecuted_blocks=1 00:06:25.141 00:06:25.141 ' 00:06:25.141 16:00:43 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:25.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.141 --rc genhtml_branch_coverage=1 00:06:25.141 --rc genhtml_function_coverage=1 00:06:25.141 --rc genhtml_legend=1 00:06:25.141 --rc geninfo_all_blocks=1 00:06:25.141 --rc geninfo_unexecuted_blocks=1 00:06:25.141 00:06:25.141 ' 00:06:25.141 16:00:43 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:25.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.141 --rc genhtml_branch_coverage=1 00:06:25.141 --rc genhtml_function_coverage=1 00:06:25.141 --rc genhtml_legend=1 00:06:25.141 --rc geninfo_all_blocks=1 00:06:25.141 --rc geninfo_unexecuted_blocks=1 00:06:25.141 00:06:25.141 ' 00:06:25.141 16:00:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:25.141 OK 00:06:25.141 16:00:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:25.141 00:06:25.141 real 0m0.327s 00:06:25.141 user 0m0.183s 00:06:25.141 sys 0m0.165s 00:06:25.141 16:00:43 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:25.141 16:00:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:25.141 ************************************ 00:06:25.141 END TEST rpc_client 00:06:25.141 ************************************ 00:06:25.141 16:00:43 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:25.141 16:00:43 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:25.141 16:00:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:25.141 16:00:43 -- common/autotest_common.sh@10 -- # set +x 00:06:25.141 ************************************ 00:06:25.141 START TEST json_config 00:06:25.141 ************************************ 00:06:25.141 16:00:43 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:25.141 16:00:43 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:25.141 16:00:43 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:06:25.141 16:00:43 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:25.141 16:00:43 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:25.141 16:00:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.141 16:00:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.141 16:00:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.141 16:00:43 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.141 16:00:43 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.141 16:00:43 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.141 16:00:43 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.141 16:00:43 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.141 16:00:43 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.141 16:00:43 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.141 16:00:43 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.141 16:00:43 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:25.141 16:00:43 json_config -- scripts/common.sh@345 -- # : 1 00:06:25.141 16:00:43 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.141 16:00:43 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.141 16:00:43 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:25.141 16:00:43 json_config -- scripts/common.sh@353 -- # local d=1 00:06:25.141 16:00:43 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.141 16:00:43 json_config -- scripts/common.sh@355 -- # echo 1 00:06:25.141 16:00:43 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.141 16:00:43 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:25.141 16:00:43 json_config -- scripts/common.sh@353 -- # local d=2 00:06:25.141 16:00:43 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.141 16:00:43 json_config -- scripts/common.sh@355 -- # echo 2 00:06:25.141 16:00:43 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.141 16:00:43 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.141 16:00:43 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.141 16:00:43 json_config -- scripts/common.sh@368 -- # return 0 00:06:25.141 16:00:43 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.141 16:00:43 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:25.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.141 --rc genhtml_branch_coverage=1 00:06:25.141 --rc genhtml_function_coverage=1 00:06:25.141 --rc genhtml_legend=1 00:06:25.141 --rc geninfo_all_blocks=1 00:06:25.141 --rc geninfo_unexecuted_blocks=1 00:06:25.141 00:06:25.141 ' 00:06:25.141 16:00:43 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:25.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.141 --rc genhtml_branch_coverage=1 00:06:25.141 --rc genhtml_function_coverage=1 00:06:25.141 --rc genhtml_legend=1 00:06:25.141 --rc geninfo_all_blocks=1 00:06:25.141 --rc geninfo_unexecuted_blocks=1 00:06:25.141 00:06:25.141 ' 00:06:25.142 16:00:43 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:25.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.142 --rc genhtml_branch_coverage=1 00:06:25.142 --rc genhtml_function_coverage=1 00:06:25.142 --rc genhtml_legend=1 00:06:25.142 --rc geninfo_all_blocks=1 00:06:25.142 --rc geninfo_unexecuted_blocks=1 00:06:25.142 00:06:25.142 ' 00:06:25.142 16:00:43 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:25.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.142 --rc genhtml_branch_coverage=1 00:06:25.142 --rc genhtml_function_coverage=1 00:06:25.142 --rc genhtml_legend=1 00:06:25.142 --rc geninfo_all_blocks=1 00:06:25.142 --rc geninfo_unexecuted_blocks=1 00:06:25.142 00:06:25.142 ' 00:06:25.142 16:00:43 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.142 16:00:43 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b307c85-9e07-4f18-80b6-51adc42f99df 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2b307c85-9e07-4f18-80b6-51adc42f99df 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:25.142 16:00:43 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.142 16:00:43 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.142 16:00:43 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.142 16:00:43 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.142 16:00:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.142 16:00:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.142 16:00:43 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.142 16:00:43 json_config -- paths/export.sh@5 -- # export PATH 00:06:25.142 16:00:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@51 -- # : 0 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:25.142 16:00:43 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:25.142 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:25.142 16:00:43 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:25.142 16:00:43 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:25.142 16:00:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:25.142 16:00:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:25.142 16:00:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:25.142 16:00:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:25.142 WARNING: No tests are enabled so not running JSON configuration tests 00:06:25.142 16:00:43 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:25.142 16:00:43 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:25.142 00:06:25.142 real 0m0.210s 00:06:25.142 user 0m0.125s 00:06:25.142 sys 0m0.090s 00:06:25.142 16:00:43 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:25.142 16:00:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.142 ************************************ 00:06:25.142 END TEST json_config 00:06:25.142 ************************************ 00:06:25.401 16:00:43 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:25.401 16:00:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:25.401 16:00:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:25.401 16:00:43 -- common/autotest_common.sh@10 -- # set +x 00:06:25.401 ************************************ 00:06:25.401 START TEST json_config_extra_key 00:06:25.401 ************************************ 00:06:25.401 16:00:43 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:25.401 16:00:44 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:25.401 16:00:44 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:06:25.401 16:00:44 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:25.401 16:00:44 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:25.401 16:00:44 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.401 16:00:44 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.401 16:00:44 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.401 16:00:44 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.401 16:00:44 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.401 16:00:44 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.401 16:00:44 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.401 16:00:44 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.401 16:00:44 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.401 16:00:44 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.401 16:00:44 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.401 16:00:44 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:25.401 16:00:44 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:25.401 16:00:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.401 16:00:44 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.401 16:00:44 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:25.402 16:00:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:25.402 16:00:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.402 16:00:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:25.402 16:00:44 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.402 16:00:44 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:25.662 16:00:44 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:25.662 16:00:44 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.662 16:00:44 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:25.662 16:00:44 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.662 16:00:44 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.662 16:00:44 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.662 16:00:44 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:25.662 16:00:44 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.662 16:00:44 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:25.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.662 --rc genhtml_branch_coverage=1 00:06:25.662 --rc genhtml_function_coverage=1 00:06:25.662 --rc genhtml_legend=1 00:06:25.662 --rc geninfo_all_blocks=1 00:06:25.662 --rc geninfo_unexecuted_blocks=1 00:06:25.662 00:06:25.662 ' 00:06:25.662 16:00:44 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:25.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.662 --rc genhtml_branch_coverage=1 00:06:25.662 --rc genhtml_function_coverage=1 00:06:25.662 --rc genhtml_legend=1 00:06:25.662 --rc geninfo_all_blocks=1 00:06:25.662 --rc geninfo_unexecuted_blocks=1 00:06:25.662 00:06:25.662 ' 00:06:25.662 16:00:44 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:25.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.662 --rc genhtml_branch_coverage=1 00:06:25.662 --rc genhtml_function_coverage=1 00:06:25.662 --rc genhtml_legend=1 00:06:25.662 --rc geninfo_all_blocks=1 00:06:25.662 --rc geninfo_unexecuted_blocks=1 00:06:25.662 00:06:25.662 ' 00:06:25.662 16:00:44 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:25.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.662 --rc genhtml_branch_coverage=1 00:06:25.662 --rc 
genhtml_function_coverage=1 00:06:25.662 --rc genhtml_legend=1 00:06:25.662 --rc geninfo_all_blocks=1 00:06:25.662 --rc geninfo_unexecuted_blocks=1 00:06:25.662 00:06:25.662 ' 00:06:25.662 16:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b307c85-9e07-4f18-80b6-51adc42f99df 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2b307c85-9e07-4f18-80b6-51adc42f99df 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:25.662 16:00:44 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.662 16:00:44 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.662 16:00:44 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.662 16:00:44 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.662 16:00:44 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.662 16:00:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.662 16:00:44 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.662 16:00:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:25.662 16:00:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:25.662 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:25.662 16:00:44 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:25.662 16:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:25.662 16:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:25.662 16:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:25.662 16:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:25.662 16:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:25.662 16:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:25.662 16:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:25.662 16:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:25.662 INFO: launching applications... 00:06:25.663 16:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:25.663 16:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:25.663 16:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
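The trace above finishes the json_config_extra_key setup: json_config/common.sh records app_pid, app_socket, app_params and configs_path for the "target" app before launching it. As a minimal sketch of the launch-and-wait step that follows (json_config_test_start_app), assuming the binary, socket and config paths traced in this run and an illustrative retry count:

    # launch spdk_tgt with the extra_key JSON config on its own RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid[target]=$!
    # poll until the RPC socket answers (waitforlisten equivalent; retry count is illustrative)
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods \
            >/dev/null 2>&1 && break
        sleep 0.5
    done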
00:06:25.663 16:00:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:25.663 16:00:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:25.663 16:00:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:25.663 16:00:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:25.663 16:00:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:25.663 16:00:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:25.663 16:00:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.663 16:00:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.663 16:00:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58551 00:06:25.663 Waiting for target to run... 00:06:25.663 16:00:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:25.663 16:00:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58551 /var/tmp/spdk_tgt.sock 00:06:25.663 16:00:44 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 58551 ']' 00:06:25.663 16:00:44 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:25.663 16:00:44 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:25.663 16:00:44 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:25.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:25.663 16:00:44 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:25.663 16:00:44 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:25.663 16:00:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:25.663 [2024-11-04 16:00:44.276264] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:06:25.663 [2024-11-04 16:00:44.276409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58551 ] 00:06:26.231 [2024-11-04 16:00:44.764269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.231 [2024-11-04 16:00:44.888315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.169 16:00:45 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:27.169 00:06:27.169 16:00:45 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:06:27.169 16:00:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:27.169 INFO: shutting down applications... 00:06:27.169 16:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
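The shutdown sequence traced below is the polling loop from json_config/common.sh: send SIGINT to the target, then wait for the pid to disappear. A minimal sketch, assuming the pid recorded at launch (58551 in this run) is still stored in app_pid:

    pid=${app_pid[target]}
    kill -SIGINT "$pid"
    # poll up to 30 times, 0.5 s apart, until the process has exited
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'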
00:06:27.169 16:00:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:27.169 16:00:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:27.169 16:00:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:27.169 16:00:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58551 ]] 00:06:27.169 16:00:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58551 00:06:27.169 16:00:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:27.169 16:00:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.169 16:00:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58551 00:06:27.169 16:00:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:27.449 16:00:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:27.449 16:00:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.449 16:00:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58551 00:06:27.449 16:00:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:28.018 16:00:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:28.018 16:00:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.018 16:00:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58551 00:06:28.018 16:00:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:28.586 16:00:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:28.586 16:00:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.586 16:00:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58551 00:06:28.586 16:00:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.182 16:00:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.182 16:00:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.182 16:00:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58551 00:06:29.182 16:00:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.752 16:00:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.752 16:00:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.752 16:00:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58551 00:06:29.752 16:00:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:30.011 16:00:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:30.011 16:00:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.011 16:00:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58551 00:06:30.011 16:00:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:30.581 16:00:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:30.581 16:00:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.581 16:00:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58551 00:06:30.581 16:00:49 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:30.581 16:00:49 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:30.581 16:00:49 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:30.581 SPDK target shutdown 
done 00:06:30.581 Success 00:06:30.581 16:00:49 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:30.581 16:00:49 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:30.581 00:06:30.581 real 0m5.273s 00:06:30.581 user 0m4.436s 00:06:30.581 sys 0m0.747s 00:06:30.581 16:00:49 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:30.581 16:00:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:30.581 ************************************ 00:06:30.581 END TEST json_config_extra_key 00:06:30.581 ************************************ 00:06:30.581 16:00:49 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:30.581 16:00:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:30.581 16:00:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:30.581 16:00:49 -- common/autotest_common.sh@10 -- # set +x 00:06:30.581 ************************************ 00:06:30.581 START TEST alias_rpc 00:06:30.581 ************************************ 00:06:30.581 16:00:49 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:30.841 * Looking for test storage... 00:06:30.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:30.841 16:00:49 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:30.841 16:00:49 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:30.841 16:00:49 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:30.841 16:00:49 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.841 16:00:49 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.842 16:00:49 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.842 16:00:49 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:30.842 16:00:49 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.842 16:00:49 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:30.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.842 --rc genhtml_branch_coverage=1 00:06:30.842 --rc genhtml_function_coverage=1 00:06:30.842 --rc genhtml_legend=1 00:06:30.842 --rc geninfo_all_blocks=1 00:06:30.842 --rc geninfo_unexecuted_blocks=1 00:06:30.842 00:06:30.842 ' 00:06:30.842 16:00:49 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:30.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.842 --rc genhtml_branch_coverage=1 00:06:30.842 --rc genhtml_function_coverage=1 00:06:30.842 --rc genhtml_legend=1 00:06:30.842 --rc geninfo_all_blocks=1 00:06:30.842 --rc geninfo_unexecuted_blocks=1 00:06:30.842 00:06:30.842 ' 00:06:30.842 16:00:49 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:30.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.842 --rc genhtml_branch_coverage=1 00:06:30.842 --rc genhtml_function_coverage=1 00:06:30.842 --rc genhtml_legend=1 00:06:30.842 --rc geninfo_all_blocks=1 00:06:30.842 --rc geninfo_unexecuted_blocks=1 00:06:30.842 00:06:30.842 ' 00:06:30.842 16:00:49 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:30.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.842 --rc genhtml_branch_coverage=1 00:06:30.842 --rc genhtml_function_coverage=1 00:06:30.842 --rc genhtml_legend=1 00:06:30.842 --rc geninfo_all_blocks=1 00:06:30.842 --rc geninfo_unexecuted_blocks=1 00:06:30.842 00:06:30.842 ' 00:06:30.842 16:00:49 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:30.842 16:00:49 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58669 00:06:30.842 16:00:49 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58669 00:06:30.842 16:00:49 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 58669 ']' 00:06:30.842 16:00:49 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.842 16:00:49 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:30.842 16:00:49 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:30.842 16:00:49 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:30.842 16:00:49 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:30.842 16:00:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.107 [2024-11-04 16:00:49.570424] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:06:31.107 [2024-11-04 16:00:49.570605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58669 ] 00:06:31.107 [2024-11-04 16:00:49.755276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.368 [2024-11-04 16:00:49.901562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.306 16:00:50 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:32.306 16:00:50 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:32.306 16:00:50 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:32.565 16:00:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58669 00:06:32.565 16:00:51 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 58669 ']' 00:06:32.565 16:00:51 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 58669 00:06:32.565 16:00:51 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:06:32.566 16:00:51 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:32.566 16:00:51 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58669 00:06:32.566 16:00:51 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:32.566 killing process with pid 58669 00:06:32.566 16:00:51 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:32.566 16:00:51 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58669' 00:06:32.566 16:00:51 alias_rpc -- common/autotest_common.sh@971 -- # kill 58669 00:06:32.566 16:00:51 alias_rpc -- common/autotest_common.sh@976 -- # wait 58669 00:06:35.856 00:06:35.856 real 0m4.619s 00:06:35.856 user 0m4.471s 00:06:35.856 sys 0m0.759s 00:06:35.856 16:00:53 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:35.856 16:00:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.856 ************************************ 00:06:35.856 END TEST alias_rpc 00:06:35.856 ************************************ 00:06:35.856 16:00:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:35.856 16:00:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:35.856 16:00:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:35.856 16:00:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:35.856 16:00:53 -- common/autotest_common.sh@10 -- # set +x 00:06:35.856 ************************************ 00:06:35.856 START TEST spdkcli_tcp 00:06:35.856 ************************************ 00:06:35.856 16:00:53 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:35.856 * Looking for test storage... 
00:06:35.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:35.856 16:00:54 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:35.856 16:00:54 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:35.856 16:00:54 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:35.856 16:00:54 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:35.856 16:00:54 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.856 16:00:54 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.856 16:00:54 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.856 16:00:54 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.856 16:00:54 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.856 16:00:54 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.856 16:00:54 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.856 16:00:54 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.856 16:00:54 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.856 16:00:54 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.856 16:00:54 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.856 16:00:54 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:35.856 16:00:54 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:35.856 16:00:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.856 16:00:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.856 16:00:54 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:35.856 16:00:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:35.857 16:00:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.857 16:00:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:35.857 16:00:54 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.857 16:00:54 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:35.857 16:00:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:35.857 16:00:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.857 16:00:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:35.857 16:00:54 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.857 16:00:54 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.857 16:00:54 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.857 16:00:54 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:35.857 16:00:54 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.857 16:00:54 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:35.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.857 --rc genhtml_branch_coverage=1 00:06:35.857 --rc genhtml_function_coverage=1 00:06:35.857 --rc genhtml_legend=1 00:06:35.857 --rc geninfo_all_blocks=1 00:06:35.857 --rc geninfo_unexecuted_blocks=1 00:06:35.857 00:06:35.857 ' 00:06:35.857 16:00:54 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:35.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.857 --rc genhtml_branch_coverage=1 00:06:35.857 --rc genhtml_function_coverage=1 00:06:35.857 --rc genhtml_legend=1 00:06:35.857 --rc geninfo_all_blocks=1 00:06:35.857 --rc geninfo_unexecuted_blocks=1 00:06:35.857 
00:06:35.857 ' 00:06:35.857 16:00:54 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:35.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.857 --rc genhtml_branch_coverage=1 00:06:35.857 --rc genhtml_function_coverage=1 00:06:35.857 --rc genhtml_legend=1 00:06:35.857 --rc geninfo_all_blocks=1 00:06:35.857 --rc geninfo_unexecuted_blocks=1 00:06:35.857 00:06:35.857 ' 00:06:35.857 16:00:54 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:35.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.857 --rc genhtml_branch_coverage=1 00:06:35.857 --rc genhtml_function_coverage=1 00:06:35.857 --rc genhtml_legend=1 00:06:35.857 --rc geninfo_all_blocks=1 00:06:35.857 --rc geninfo_unexecuted_blocks=1 00:06:35.857 00:06:35.857 ' 00:06:35.857 16:00:54 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:35.857 16:00:54 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:35.857 16:00:54 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:35.857 16:00:54 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:35.857 16:00:54 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:35.857 16:00:54 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:35.857 16:00:54 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:35.857 16:00:54 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:35.857 16:00:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:35.857 16:00:54 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58782 00:06:35.857 16:00:54 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:35.857 16:00:54 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58782 00:06:35.857 16:00:54 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 58782 ']' 00:06:35.857 16:00:54 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.857 16:00:54 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:35.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.857 16:00:54 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.857 16:00:54 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:35.857 16:00:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:35.857 [2024-11-04 16:00:54.294443] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:06:35.857 [2024-11-04 16:00:54.295032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58782 ] 00:06:35.857 [2024-11-04 16:00:54.479520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.116 [2024-11-04 16:00:54.620385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.116 [2024-11-04 16:00:54.620404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.060 16:00:55 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:37.060 16:00:55 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:06:37.060 16:00:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58807 00:06:37.060 16:00:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:37.060 16:00:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:37.320 [ 00:06:37.320 "bdev_malloc_delete", 00:06:37.320 "bdev_malloc_create", 00:06:37.320 "bdev_null_resize", 00:06:37.320 "bdev_null_delete", 00:06:37.320 "bdev_null_create", 00:06:37.320 "bdev_nvme_cuse_unregister", 00:06:37.320 "bdev_nvme_cuse_register", 00:06:37.320 "bdev_opal_new_user", 00:06:37.320 "bdev_opal_set_lock_state", 00:06:37.320 "bdev_opal_delete", 00:06:37.320 "bdev_opal_get_info", 00:06:37.320 "bdev_opal_create", 00:06:37.320 "bdev_nvme_opal_revert", 00:06:37.320 "bdev_nvme_opal_init", 00:06:37.320 "bdev_nvme_send_cmd", 00:06:37.320 "bdev_nvme_set_keys", 00:06:37.320 "bdev_nvme_get_path_iostat", 00:06:37.320 "bdev_nvme_get_mdns_discovery_info", 00:06:37.320 "bdev_nvme_stop_mdns_discovery", 00:06:37.320 "bdev_nvme_start_mdns_discovery", 00:06:37.320 "bdev_nvme_set_multipath_policy", 00:06:37.320 "bdev_nvme_set_preferred_path", 00:06:37.320 "bdev_nvme_get_io_paths", 00:06:37.320 "bdev_nvme_remove_error_injection", 00:06:37.320 "bdev_nvme_add_error_injection", 00:06:37.320 "bdev_nvme_get_discovery_info", 00:06:37.320 "bdev_nvme_stop_discovery", 00:06:37.320 "bdev_nvme_start_discovery", 00:06:37.320 "bdev_nvme_get_controller_health_info", 00:06:37.320 "bdev_nvme_disable_controller", 00:06:37.320 "bdev_nvme_enable_controller", 00:06:37.320 "bdev_nvme_reset_controller", 00:06:37.320 "bdev_nvme_get_transport_statistics", 00:06:37.320 "bdev_nvme_apply_firmware", 00:06:37.320 "bdev_nvme_detach_controller", 00:06:37.320 "bdev_nvme_get_controllers", 00:06:37.320 "bdev_nvme_attach_controller", 00:06:37.320 "bdev_nvme_set_hotplug", 00:06:37.320 "bdev_nvme_set_options", 00:06:37.320 "bdev_passthru_delete", 00:06:37.320 "bdev_passthru_create", 00:06:37.320 "bdev_lvol_set_parent_bdev", 00:06:37.320 "bdev_lvol_set_parent", 00:06:37.320 "bdev_lvol_check_shallow_copy", 00:06:37.320 "bdev_lvol_start_shallow_copy", 00:06:37.320 "bdev_lvol_grow_lvstore", 00:06:37.320 "bdev_lvol_get_lvols", 00:06:37.320 "bdev_lvol_get_lvstores", 00:06:37.320 "bdev_lvol_delete", 00:06:37.320 "bdev_lvol_set_read_only", 00:06:37.320 "bdev_lvol_resize", 00:06:37.320 "bdev_lvol_decouple_parent", 00:06:37.320 "bdev_lvol_inflate", 00:06:37.320 "bdev_lvol_rename", 00:06:37.320 "bdev_lvol_clone_bdev", 00:06:37.320 "bdev_lvol_clone", 00:06:37.320 "bdev_lvol_snapshot", 00:06:37.320 "bdev_lvol_create", 00:06:37.320 "bdev_lvol_delete_lvstore", 00:06:37.320 "bdev_lvol_rename_lvstore", 00:06:37.320 
"bdev_lvol_create_lvstore", 00:06:37.320 "bdev_raid_set_options", 00:06:37.320 "bdev_raid_remove_base_bdev", 00:06:37.320 "bdev_raid_add_base_bdev", 00:06:37.320 "bdev_raid_delete", 00:06:37.320 "bdev_raid_create", 00:06:37.320 "bdev_raid_get_bdevs", 00:06:37.320 "bdev_error_inject_error", 00:06:37.320 "bdev_error_delete", 00:06:37.320 "bdev_error_create", 00:06:37.320 "bdev_split_delete", 00:06:37.320 "bdev_split_create", 00:06:37.320 "bdev_delay_delete", 00:06:37.320 "bdev_delay_create", 00:06:37.320 "bdev_delay_update_latency", 00:06:37.320 "bdev_zone_block_delete", 00:06:37.320 "bdev_zone_block_create", 00:06:37.320 "blobfs_create", 00:06:37.320 "blobfs_detect", 00:06:37.320 "blobfs_set_cache_size", 00:06:37.320 "bdev_xnvme_delete", 00:06:37.320 "bdev_xnvme_create", 00:06:37.320 "bdev_aio_delete", 00:06:37.320 "bdev_aio_rescan", 00:06:37.320 "bdev_aio_create", 00:06:37.320 "bdev_ftl_set_property", 00:06:37.320 "bdev_ftl_get_properties", 00:06:37.320 "bdev_ftl_get_stats", 00:06:37.320 "bdev_ftl_unmap", 00:06:37.320 "bdev_ftl_unload", 00:06:37.320 "bdev_ftl_delete", 00:06:37.320 "bdev_ftl_load", 00:06:37.320 "bdev_ftl_create", 00:06:37.320 "bdev_virtio_attach_controller", 00:06:37.320 "bdev_virtio_scsi_get_devices", 00:06:37.320 "bdev_virtio_detach_controller", 00:06:37.320 "bdev_virtio_blk_set_hotplug", 00:06:37.320 "bdev_iscsi_delete", 00:06:37.320 "bdev_iscsi_create", 00:06:37.320 "bdev_iscsi_set_options", 00:06:37.320 "accel_error_inject_error", 00:06:37.320 "ioat_scan_accel_module", 00:06:37.320 "dsa_scan_accel_module", 00:06:37.320 "iaa_scan_accel_module", 00:06:37.320 "keyring_file_remove_key", 00:06:37.320 "keyring_file_add_key", 00:06:37.320 "keyring_linux_set_options", 00:06:37.320 "fsdev_aio_delete", 00:06:37.320 "fsdev_aio_create", 00:06:37.320 "iscsi_get_histogram", 00:06:37.320 "iscsi_enable_histogram", 00:06:37.320 "iscsi_set_options", 00:06:37.320 "iscsi_get_auth_groups", 00:06:37.320 "iscsi_auth_group_remove_secret", 00:06:37.320 "iscsi_auth_group_add_secret", 00:06:37.320 "iscsi_delete_auth_group", 00:06:37.320 "iscsi_create_auth_group", 00:06:37.320 "iscsi_set_discovery_auth", 00:06:37.320 "iscsi_get_options", 00:06:37.320 "iscsi_target_node_request_logout", 00:06:37.320 "iscsi_target_node_set_redirect", 00:06:37.320 "iscsi_target_node_set_auth", 00:06:37.320 "iscsi_target_node_add_lun", 00:06:37.320 "iscsi_get_stats", 00:06:37.320 "iscsi_get_connections", 00:06:37.320 "iscsi_portal_group_set_auth", 00:06:37.320 "iscsi_start_portal_group", 00:06:37.320 "iscsi_delete_portal_group", 00:06:37.320 "iscsi_create_portal_group", 00:06:37.320 "iscsi_get_portal_groups", 00:06:37.320 "iscsi_delete_target_node", 00:06:37.320 "iscsi_target_node_remove_pg_ig_maps", 00:06:37.320 "iscsi_target_node_add_pg_ig_maps", 00:06:37.320 "iscsi_create_target_node", 00:06:37.320 "iscsi_get_target_nodes", 00:06:37.320 "iscsi_delete_initiator_group", 00:06:37.320 "iscsi_initiator_group_remove_initiators", 00:06:37.320 "iscsi_initiator_group_add_initiators", 00:06:37.320 "iscsi_create_initiator_group", 00:06:37.320 "iscsi_get_initiator_groups", 00:06:37.320 "nvmf_set_crdt", 00:06:37.320 "nvmf_set_config", 00:06:37.320 "nvmf_set_max_subsystems", 00:06:37.320 "nvmf_stop_mdns_prr", 00:06:37.320 "nvmf_publish_mdns_prr", 00:06:37.320 "nvmf_subsystem_get_listeners", 00:06:37.320 "nvmf_subsystem_get_qpairs", 00:06:37.320 "nvmf_subsystem_get_controllers", 00:06:37.320 "nvmf_get_stats", 00:06:37.320 "nvmf_get_transports", 00:06:37.320 "nvmf_create_transport", 00:06:37.320 "nvmf_get_targets", 00:06:37.320 
"nvmf_delete_target", 00:06:37.320 "nvmf_create_target", 00:06:37.320 "nvmf_subsystem_allow_any_host", 00:06:37.320 "nvmf_subsystem_set_keys", 00:06:37.320 "nvmf_subsystem_remove_host", 00:06:37.320 "nvmf_subsystem_add_host", 00:06:37.320 "nvmf_ns_remove_host", 00:06:37.320 "nvmf_ns_add_host", 00:06:37.320 "nvmf_subsystem_remove_ns", 00:06:37.320 "nvmf_subsystem_set_ns_ana_group", 00:06:37.320 "nvmf_subsystem_add_ns", 00:06:37.320 "nvmf_subsystem_listener_set_ana_state", 00:06:37.320 "nvmf_discovery_get_referrals", 00:06:37.320 "nvmf_discovery_remove_referral", 00:06:37.320 "nvmf_discovery_add_referral", 00:06:37.320 "nvmf_subsystem_remove_listener", 00:06:37.320 "nvmf_subsystem_add_listener", 00:06:37.320 "nvmf_delete_subsystem", 00:06:37.320 "nvmf_create_subsystem", 00:06:37.320 "nvmf_get_subsystems", 00:06:37.320 "env_dpdk_get_mem_stats", 00:06:37.320 "nbd_get_disks", 00:06:37.320 "nbd_stop_disk", 00:06:37.320 "nbd_start_disk", 00:06:37.320 "ublk_recover_disk", 00:06:37.320 "ublk_get_disks", 00:06:37.320 "ublk_stop_disk", 00:06:37.320 "ublk_start_disk", 00:06:37.320 "ublk_destroy_target", 00:06:37.320 "ublk_create_target", 00:06:37.320 "virtio_blk_create_transport", 00:06:37.320 "virtio_blk_get_transports", 00:06:37.320 "vhost_controller_set_coalescing", 00:06:37.320 "vhost_get_controllers", 00:06:37.320 "vhost_delete_controller", 00:06:37.320 "vhost_create_blk_controller", 00:06:37.320 "vhost_scsi_controller_remove_target", 00:06:37.320 "vhost_scsi_controller_add_target", 00:06:37.320 "vhost_start_scsi_controller", 00:06:37.320 "vhost_create_scsi_controller", 00:06:37.320 "thread_set_cpumask", 00:06:37.320 "scheduler_set_options", 00:06:37.320 "framework_get_governor", 00:06:37.320 "framework_get_scheduler", 00:06:37.320 "framework_set_scheduler", 00:06:37.320 "framework_get_reactors", 00:06:37.320 "thread_get_io_channels", 00:06:37.320 "thread_get_pollers", 00:06:37.320 "thread_get_stats", 00:06:37.320 "framework_monitor_context_switch", 00:06:37.320 "spdk_kill_instance", 00:06:37.320 "log_enable_timestamps", 00:06:37.320 "log_get_flags", 00:06:37.320 "log_clear_flag", 00:06:37.320 "log_set_flag", 00:06:37.320 "log_get_level", 00:06:37.320 "log_set_level", 00:06:37.320 "log_get_print_level", 00:06:37.320 "log_set_print_level", 00:06:37.320 "framework_enable_cpumask_locks", 00:06:37.320 "framework_disable_cpumask_locks", 00:06:37.320 "framework_wait_init", 00:06:37.320 "framework_start_init", 00:06:37.320 "scsi_get_devices", 00:06:37.320 "bdev_get_histogram", 00:06:37.320 "bdev_enable_histogram", 00:06:37.320 "bdev_set_qos_limit", 00:06:37.320 "bdev_set_qd_sampling_period", 00:06:37.320 "bdev_get_bdevs", 00:06:37.320 "bdev_reset_iostat", 00:06:37.321 "bdev_get_iostat", 00:06:37.321 "bdev_examine", 00:06:37.321 "bdev_wait_for_examine", 00:06:37.321 "bdev_set_options", 00:06:37.321 "accel_get_stats", 00:06:37.321 "accel_set_options", 00:06:37.321 "accel_set_driver", 00:06:37.321 "accel_crypto_key_destroy", 00:06:37.321 "accel_crypto_keys_get", 00:06:37.321 "accel_crypto_key_create", 00:06:37.321 "accel_assign_opc", 00:06:37.321 "accel_get_module_info", 00:06:37.321 "accel_get_opc_assignments", 00:06:37.321 "vmd_rescan", 00:06:37.321 "vmd_remove_device", 00:06:37.321 "vmd_enable", 00:06:37.321 "sock_get_default_impl", 00:06:37.321 "sock_set_default_impl", 00:06:37.321 "sock_impl_set_options", 00:06:37.321 "sock_impl_get_options", 00:06:37.321 "iobuf_get_stats", 00:06:37.321 "iobuf_set_options", 00:06:37.321 "keyring_get_keys", 00:06:37.321 "framework_get_pci_devices", 00:06:37.321 
"framework_get_config", 00:06:37.321 "framework_get_subsystems", 00:06:37.321 "fsdev_set_opts", 00:06:37.321 "fsdev_get_opts", 00:06:37.321 "trace_get_info", 00:06:37.321 "trace_get_tpoint_group_mask", 00:06:37.321 "trace_disable_tpoint_group", 00:06:37.321 "trace_enable_tpoint_group", 00:06:37.321 "trace_clear_tpoint_mask", 00:06:37.321 "trace_set_tpoint_mask", 00:06:37.321 "notify_get_notifications", 00:06:37.321 "notify_get_types", 00:06:37.321 "spdk_get_version", 00:06:37.321 "rpc_get_methods" 00:06:37.321 ] 00:06:37.321 16:00:55 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:37.321 16:00:55 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:37.321 16:00:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:37.321 16:00:55 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:37.321 16:00:55 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58782 00:06:37.321 16:00:55 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58782 ']' 00:06:37.321 16:00:55 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58782 00:06:37.321 16:00:55 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:06:37.321 16:00:55 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:37.321 16:00:55 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58782 00:06:37.321 16:00:55 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:37.321 16:00:55 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:37.321 16:00:55 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58782' 00:06:37.321 killing process with pid 58782 00:06:37.321 16:00:55 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58782 00:06:37.321 16:00:55 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58782 00:06:40.607 00:06:40.607 real 0m4.675s 00:06:40.607 user 0m8.201s 00:06:40.607 sys 0m0.839s 00:06:40.607 16:00:58 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:40.607 ************************************ 00:06:40.607 END TEST spdkcli_tcp 00:06:40.607 16:00:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.607 ************************************ 00:06:40.607 16:00:58 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:40.607 16:00:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:40.607 16:00:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:40.607 16:00:58 -- common/autotest_common.sh@10 -- # set +x 00:06:40.607 ************************************ 00:06:40.607 START TEST dpdk_mem_utility 00:06:40.607 ************************************ 00:06:40.607 16:00:58 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:40.607 * Looking for test storage... 
00:06:40.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:40.607 16:00:58 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:40.607 16:00:58 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:40.607 16:00:58 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:40.607 16:00:58 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.607 16:00:58 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.608 16:00:58 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:40.608 16:00:58 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.608 16:00:58 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:40.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.608 --rc genhtml_branch_coverage=1 00:06:40.608 --rc genhtml_function_coverage=1 00:06:40.608 --rc genhtml_legend=1 00:06:40.608 --rc geninfo_all_blocks=1 00:06:40.608 --rc geninfo_unexecuted_blocks=1 00:06:40.608 00:06:40.608 ' 00:06:40.608 16:00:58 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:40.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.608 --rc 
genhtml_branch_coverage=1 00:06:40.608 --rc genhtml_function_coverage=1 00:06:40.608 --rc genhtml_legend=1 00:06:40.608 --rc geninfo_all_blocks=1 00:06:40.608 --rc geninfo_unexecuted_blocks=1 00:06:40.608 00:06:40.608 ' 00:06:40.608 16:00:58 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:40.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.608 --rc genhtml_branch_coverage=1 00:06:40.608 --rc genhtml_function_coverage=1 00:06:40.608 --rc genhtml_legend=1 00:06:40.608 --rc geninfo_all_blocks=1 00:06:40.608 --rc geninfo_unexecuted_blocks=1 00:06:40.608 00:06:40.608 ' 00:06:40.608 16:00:58 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:40.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.608 --rc genhtml_branch_coverage=1 00:06:40.608 --rc genhtml_function_coverage=1 00:06:40.608 --rc genhtml_legend=1 00:06:40.608 --rc geninfo_all_blocks=1 00:06:40.608 --rc geninfo_unexecuted_blocks=1 00:06:40.608 00:06:40.608 ' 00:06:40.608 16:00:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:40.608 16:00:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58912 00:06:40.608 16:00:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:40.608 16:00:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58912 00:06:40.608 16:00:58 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58912 ']' 00:06:40.608 16:00:58 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.608 16:00:58 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:40.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.608 16:00:58 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.608 16:00:58 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:40.608 16:00:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:40.608 [2024-11-04 16:00:59.070017] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:06:40.608 [2024-11-04 16:00:59.070198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58912 ] 00:06:40.608 [2024-11-04 16:00:59.256447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.867 [2024-11-04 16:00:59.395034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.803 16:01:00 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:41.803 16:01:00 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:06:41.803 16:01:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:41.803 16:01:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:41.803 16:01:00 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.803 16:01:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.803 { 00:06:41.803 "filename": "/tmp/spdk_mem_dump.txt" 00:06:41.803 } 00:06:41.803 16:01:00 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.803 16:01:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:41.803 DPDK memory size 816.000000 MiB in 1 heap(s) 00:06:41.803 1 heaps totaling size 816.000000 MiB 00:06:41.803 size: 816.000000 MiB heap id: 0 00:06:41.803 end heaps---------- 00:06:41.803 9 mempools totaling size 595.772034 MiB 00:06:41.803 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:41.803 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:41.803 size: 92.545471 MiB name: bdev_io_58912 00:06:41.803 size: 50.003479 MiB name: msgpool_58912 00:06:41.803 size: 36.509338 MiB name: fsdev_io_58912 00:06:41.803 size: 21.763794 MiB name: PDU_Pool 00:06:41.803 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:41.804 size: 4.133484 MiB name: evtpool_58912 00:06:41.804 size: 0.026123 MiB name: Session_Pool 00:06:41.804 end mempools------- 00:06:41.804 6 memzones totaling size 4.142822 MiB 00:06:41.804 size: 1.000366 MiB name: RG_ring_0_58912 00:06:41.804 size: 1.000366 MiB name: RG_ring_1_58912 00:06:41.804 size: 1.000366 MiB name: RG_ring_4_58912 00:06:41.804 size: 1.000366 MiB name: RG_ring_5_58912 00:06:41.804 size: 0.125366 MiB name: RG_ring_2_58912 00:06:41.804 size: 0.015991 MiB name: RG_ring_3_58912 00:06:41.804 end memzones------- 00:06:41.804 16:01:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:42.065 heap id: 0 total size: 816.000000 MiB number of busy elements: 316 number of free elements: 18 00:06:42.065 list of free elements. 
size: 16.791138 MiB 00:06:42.065 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:42.065 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:42.065 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:42.065 element at address: 0x200018d00040 with size: 0.999939 MiB 00:06:42.065 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:42.065 element at address: 0x200019200000 with size: 0.999084 MiB 00:06:42.065 element at address: 0x200031e00000 with size: 0.994324 MiB 00:06:42.065 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:42.065 element at address: 0x200018a00000 with size: 0.959656 MiB 00:06:42.065 element at address: 0x200019500040 with size: 0.936401 MiB 00:06:42.065 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:42.065 element at address: 0x20001ac00000 with size: 0.561462 MiB 00:06:42.065 element at address: 0x200000c00000 with size: 0.490173 MiB 00:06:42.065 element at address: 0x200018e00000 with size: 0.487976 MiB 00:06:42.065 element at address: 0x200019600000 with size: 0.485413 MiB 00:06:42.065 element at address: 0x200012c00000 with size: 0.443481 MiB 00:06:42.065 element at address: 0x200028000000 with size: 0.390442 MiB 00:06:42.065 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:42.065 list of standard malloc elements. size: 199.287964 MiB 00:06:42.065 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:42.065 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:42.065 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:06:42.065 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:42.065 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:42.065 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:42.065 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:06:42.065 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:42.065 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:42.065 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:06:42.065 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:42.065 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:06:42.065 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:42.065 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:06:42.065 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:42.065 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:42.065 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012bff180 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012bff580 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012bff980 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012c71880 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012c71980 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012c72080 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012c72180 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200018e7cfc0 
with size: 0.000244 MiB 00:06:42.066 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:42.066 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:06:42.066 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 
00:06:42.066 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:06:42.066 element at 
address: 0x20001ac94ec0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:06:42.066 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:06:42.067 element at address: 0x200028063f40 with size: 0.000244 MiB 00:06:42.067 element at address: 0x200028064040 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806af80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806b080 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806b180 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806b280 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806b380 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806b480 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806b580 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806b680 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806b780 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806b880 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806b980 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806be80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806c080 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806c180 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806c280 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806c380 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806c480 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806c580 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806c680 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806c780 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806c880 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806c980 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806d080 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806d180 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806d280 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806d380 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806d480 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806d580 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806d680 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806d780 
with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806d880 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806d980 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806da80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806db80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806de80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806df80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806e080 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806e180 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806e280 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806e380 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806e480 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806e580 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806e680 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806e780 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806e880 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806e980 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806f080 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806f180 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806f280 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806f380 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806f480 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806f580 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806f680 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806f780 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806f880 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806f980 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:06:42.067 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:06:42.067 list of memzone associated elements. 
size: 599.920898 MiB 00:06:42.067 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:06:42.067 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:42.067 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:06:42.067 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:42.067 element at address: 0x200012df4740 with size: 92.045105 MiB 00:06:42.067 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58912_0 00:06:42.067 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:42.067 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58912_0 00:06:42.067 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:42.067 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58912_0 00:06:42.067 element at address: 0x2000197be900 with size: 20.255615 MiB 00:06:42.067 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:42.067 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:06:42.067 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:42.067 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:42.067 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58912_0 00:06:42.067 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:42.067 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58912 00:06:42.067 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:42.067 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58912 00:06:42.067 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:42.067 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:42.067 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:06:42.067 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:42.067 element at address: 0x200018afde00 with size: 1.008179 MiB 00:06:42.067 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:42.067 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:06:42.067 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:42.067 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:42.067 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58912 00:06:42.067 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:42.067 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58912 00:06:42.067 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:06:42.067 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58912 00:06:42.067 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:06:42.067 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58912 00:06:42.067 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:42.067 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58912 00:06:42.067 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:42.067 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58912 00:06:42.067 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:06:42.067 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:42.067 element at address: 0x200012c72280 with size: 0.500549 MiB 00:06:42.067 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:42.067 element at address: 0x20001967c440 with size: 0.250549 MiB 00:06:42.067 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:42.067 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:42.067 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58912 00:06:42.067 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:42.067 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58912 00:06:42.067 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:06:42.067 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:42.067 element at address: 0x200028064140 with size: 0.023804 MiB 00:06:42.067 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:42.067 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:42.067 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58912 00:06:42.067 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:06:42.067 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:42.067 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:42.068 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58912 00:06:42.068 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:42.068 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58912 00:06:42.068 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:42.068 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58912 00:06:42.068 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:06:42.068 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:42.068 16:01:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:42.068 16:01:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58912 00:06:42.068 16:01:00 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58912 ']' 00:06:42.068 16:01:00 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58912 00:06:42.068 16:01:00 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:06:42.068 16:01:00 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:42.068 16:01:00 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58912 00:06:42.068 16:01:00 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:42.068 16:01:00 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:42.068 killing process with pid 58912 00:06:42.068 16:01:00 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58912' 00:06:42.068 16:01:00 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58912 00:06:42.068 16:01:00 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58912 00:06:44.603 00:06:44.603 real 0m4.545s 00:06:44.603 user 0m4.244s 00:06:44.603 sys 0m0.788s 00:06:44.603 16:01:03 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:44.603 16:01:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:44.603 ************************************ 00:06:44.603 END TEST dpdk_mem_utility 00:06:44.603 ************************************ 00:06:44.603 16:01:03 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:44.603 16:01:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:44.603 16:01:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.603 16:01:03 -- common/autotest_common.sh@10 -- # set +x 
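Editor's note: the heap and memzone listing above is the DPDK memory dump that the dpdk_mem_utility test collects from the running target over RPC before tearing it down (killprocess 58912). As a rough sketch of how the same dump can be pulled from a live SPDK target by hand, assuming the env_dpdk_get_mem_stats RPC and the /tmp output path it reports (both recalled from the SPDK tree rather than shown verbatim in this trace):

    # ask a running spdk_tgt to write its DPDK malloc/memzone statistics to a file
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # the RPC replies with the dump file location, e.g. /tmp/spdk_mem_dump.txt (assumed path)
    grep -c 'element at address' /tmp/spdk_mem_dump.txt    # count the malloc elements listed
    grep 'associated memzone info' /tmp/spdk_mem_dump.txt  # show the memzone associations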
00:06:44.603 ************************************ 00:06:44.603 START TEST event 00:06:44.603 ************************************ 00:06:44.603 16:01:03 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:44.862 * Looking for test storage... 00:06:44.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:44.862 16:01:03 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:44.862 16:01:03 event -- common/autotest_common.sh@1691 -- # lcov --version 00:06:44.862 16:01:03 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:44.862 16:01:03 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:44.862 16:01:03 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.862 16:01:03 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.862 16:01:03 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.862 16:01:03 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.862 16:01:03 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.862 16:01:03 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.862 16:01:03 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.862 16:01:03 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.862 16:01:03 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.862 16:01:03 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.862 16:01:03 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.862 16:01:03 event -- scripts/common.sh@344 -- # case "$op" in 00:06:44.862 16:01:03 event -- scripts/common.sh@345 -- # : 1 00:06:44.862 16:01:03 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.862 16:01:03 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.862 16:01:03 event -- scripts/common.sh@365 -- # decimal 1 00:06:44.862 16:01:03 event -- scripts/common.sh@353 -- # local d=1 00:06:44.862 16:01:03 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.862 16:01:03 event -- scripts/common.sh@355 -- # echo 1 00:06:44.862 16:01:03 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.862 16:01:03 event -- scripts/common.sh@366 -- # decimal 2 00:06:44.862 16:01:03 event -- scripts/common.sh@353 -- # local d=2 00:06:44.862 16:01:03 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.862 16:01:03 event -- scripts/common.sh@355 -- # echo 2 00:06:44.862 16:01:03 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.862 16:01:03 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.862 16:01:03 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.862 16:01:03 event -- scripts/common.sh@368 -- # return 0 00:06:44.862 16:01:03 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.862 16:01:03 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:44.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.862 --rc genhtml_branch_coverage=1 00:06:44.862 --rc genhtml_function_coverage=1 00:06:44.862 --rc genhtml_legend=1 00:06:44.862 --rc geninfo_all_blocks=1 00:06:44.862 --rc geninfo_unexecuted_blocks=1 00:06:44.862 00:06:44.862 ' 00:06:44.862 16:01:03 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:44.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.862 --rc genhtml_branch_coverage=1 00:06:44.862 --rc genhtml_function_coverage=1 00:06:44.862 --rc genhtml_legend=1 00:06:44.862 --rc 
geninfo_all_blocks=1 00:06:44.862 --rc geninfo_unexecuted_blocks=1 00:06:44.862 00:06:44.862 ' 00:06:44.862 16:01:03 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:44.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.862 --rc genhtml_branch_coverage=1 00:06:44.862 --rc genhtml_function_coverage=1 00:06:44.862 --rc genhtml_legend=1 00:06:44.862 --rc geninfo_all_blocks=1 00:06:44.862 --rc geninfo_unexecuted_blocks=1 00:06:44.862 00:06:44.862 ' 00:06:44.862 16:01:03 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:44.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.862 --rc genhtml_branch_coverage=1 00:06:44.862 --rc genhtml_function_coverage=1 00:06:44.862 --rc genhtml_legend=1 00:06:44.862 --rc geninfo_all_blocks=1 00:06:44.862 --rc geninfo_unexecuted_blocks=1 00:06:44.862 00:06:44.862 ' 00:06:44.862 16:01:03 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:44.862 16:01:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:44.862 16:01:03 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:44.862 16:01:03 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:44.862 16:01:03 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.862 16:01:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.862 ************************************ 00:06:44.862 START TEST event_perf 00:06:44.862 ************************************ 00:06:44.862 16:01:03 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:45.121 Running I/O for 1 seconds...[2024-11-04 16:01:03.611448] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:06:45.121 [2024-11-04 16:01:03.611583] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59026 ] 00:06:45.121 [2024-11-04 16:01:03.799868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.381 [2024-11-04 16:01:03.957491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.381 [2024-11-04 16:01:03.957559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.381 [2024-11-04 16:01:03.957738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.381 [2024-11-04 16:01:03.957795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.795 Running I/O for 1 seconds... 00:06:46.795 lcore 0: 105745 00:06:46.795 lcore 1: 105748 00:06:46.795 lcore 2: 105743 00:06:46.795 lcore 3: 105746 00:06:46.795 done. 
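Editor's note: the per-lcore counters printed by event_perf above are the number of events each reactor processed during the 1-second run (-t 1, mask 0xF, four reactors), so the aggregate rate is simply their sum. A quick way to total them from the numbers in this trace:

    # sum the four lcore counters reported above (~423k events in one second)
    printf '%s\n' 105745 105748 105743 105746 | awk '{s += $1} END {print s, "events in 1 second"}'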
00:06:46.795 00:06:46.795 real 0m1.668s 00:06:46.795 user 0m4.386s 00:06:46.795 sys 0m0.155s 00:06:46.795 16:01:05 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:46.795 16:01:05 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.795 ************************************ 00:06:46.795 END TEST event_perf 00:06:46.795 ************************************ 00:06:46.795 16:01:05 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:46.795 16:01:05 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:46.795 16:01:05 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:46.795 16:01:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.795 ************************************ 00:06:46.795 START TEST event_reactor 00:06:46.795 ************************************ 00:06:46.795 16:01:05 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:46.795 [2024-11-04 16:01:05.352036] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:06:46.795 [2024-11-04 16:01:05.352157] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59071 ] 00:06:47.054 [2024-11-04 16:01:05.538956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.054 [2024-11-04 16:01:05.686221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.432 test_start 00:06:48.432 oneshot 00:06:48.432 tick 100 00:06:48.432 tick 100 00:06:48.432 tick 250 00:06:48.432 tick 100 00:06:48.432 tick 100 00:06:48.432 tick 100 00:06:48.432 tick 250 00:06:48.432 tick 500 00:06:48.432 tick 100 00:06:48.432 tick 100 00:06:48.432 tick 250 00:06:48.432 tick 100 00:06:48.432 tick 100 00:06:48.432 test_end 00:06:48.432 00:06:48.432 real 0m1.632s 00:06:48.432 user 0m1.391s 00:06:48.432 sys 0m0.132s 00:06:48.432 16:01:06 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:48.432 16:01:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:48.432 ************************************ 00:06:48.432 END TEST event_reactor 00:06:48.432 ************************************ 00:06:48.432 16:01:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:48.432 16:01:06 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:48.432 16:01:06 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.432 16:01:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.432 ************************************ 00:06:48.432 START TEST event_reactor_perf 00:06:48.432 ************************************ 00:06:48.432 16:01:07 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:48.432 [2024-11-04 16:01:07.057310] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:06:48.432 [2024-11-04 16:01:07.057432] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59104 ] 00:06:48.691 [2024-11-04 16:01:07.241294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.691 [2024-11-04 16:01:07.387417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.067 test_start 00:06:50.067 test_end 00:06:50.067 Performance: 379004 events per second 00:06:50.067 00:06:50.067 real 0m1.629s 00:06:50.067 user 0m1.395s 00:06:50.067 sys 0m0.125s 00:06:50.067 16:01:08 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:50.067 16:01:08 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:50.067 ************************************ 00:06:50.067 END TEST event_reactor_perf 00:06:50.067 ************************************ 00:06:50.067 16:01:08 event -- event/event.sh@49 -- # uname -s 00:06:50.067 16:01:08 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:50.067 16:01:08 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:50.067 16:01:08 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:50.067 16:01:08 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.067 16:01:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.067 ************************************ 00:06:50.067 START TEST event_scheduler 00:06:50.067 ************************************ 00:06:50.067 16:01:08 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:50.326 * Looking for test storage... 
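Editor's note: event_reactor and event_reactor_perf above exercise the reactor event loop directly (a single reactor on core 0, reported at 379004 events per second). On a live target the same reactor and thread state can be inspected over RPC; a small sketch, assuming a target on the default /var/tmp/spdk.sock that has already completed framework_start_init:

    # list reactors, their lcores and the lightweight threads scheduled on them
    ./scripts/rpc.py framework_get_reactors
    # per-thread busy/idle counters, useful when comparing schedulers
    ./scripts/rpc.py thread_get_stats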
00:06:50.326 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:50.326 16:01:08 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:50.326 16:01:08 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:50.326 16:01:08 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:50.326 16:01:08 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.326 16:01:08 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:50.326 16:01:08 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.326 16:01:08 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:50.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.326 --rc genhtml_branch_coverage=1 00:06:50.326 --rc genhtml_function_coverage=1 00:06:50.326 --rc genhtml_legend=1 00:06:50.326 --rc geninfo_all_blocks=1 00:06:50.326 --rc geninfo_unexecuted_blocks=1 00:06:50.326 00:06:50.326 ' 00:06:50.326 16:01:08 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:50.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.326 --rc genhtml_branch_coverage=1 00:06:50.326 --rc genhtml_function_coverage=1 00:06:50.326 --rc genhtml_legend=1 00:06:50.326 --rc geninfo_all_blocks=1 00:06:50.326 --rc geninfo_unexecuted_blocks=1 00:06:50.326 00:06:50.326 ' 00:06:50.326 16:01:08 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:50.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.326 --rc genhtml_branch_coverage=1 00:06:50.326 --rc genhtml_function_coverage=1 00:06:50.326 --rc genhtml_legend=1 00:06:50.326 --rc geninfo_all_blocks=1 00:06:50.326 --rc geninfo_unexecuted_blocks=1 00:06:50.326 00:06:50.326 ' 00:06:50.326 16:01:08 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:50.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.326 --rc genhtml_branch_coverage=1 00:06:50.326 --rc genhtml_function_coverage=1 00:06:50.326 --rc genhtml_legend=1 00:06:50.326 --rc geninfo_all_blocks=1 00:06:50.326 --rc geninfo_unexecuted_blocks=1 00:06:50.326 00:06:50.326 ' 00:06:50.326 16:01:08 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:50.326 16:01:08 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59178 00:06:50.326 16:01:08 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:50.326 16:01:08 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.326 16:01:08 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59178 00:06:50.326 16:01:08 
event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 59178 ']' 00:06:50.327 16:01:08 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.327 16:01:08 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:50.327 16:01:08 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.327 16:01:08 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:50.327 16:01:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.327 [2024-11-04 16:01:09.046017] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:06:50.327 [2024-11-04 16:01:09.046158] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59178 ] 00:06:50.585 [2024-11-04 16:01:09.218612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.844 [2024-11-04 16:01:09.371940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.844 [2024-11-04 16:01:09.372031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.844 [2024-11-04 16:01:09.372197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.844 [2024-11-04 16:01:09.372223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.413 16:01:09 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:51.413 16:01:09 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:06:51.413 16:01:09 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:51.413 16:01:09 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.413 16:01:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:51.413 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:51.413 POWER: Cannot set governor of lcore 0 to userspace 00:06:51.413 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:51.413 POWER: Cannot set governor of lcore 0 to performance 00:06:51.413 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:51.413 POWER: Cannot set governor of lcore 0 to userspace 00:06:51.413 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:51.413 POWER: Cannot set governor of lcore 0 to userspace 00:06:51.413 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:51.413 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:51.413 POWER: Unable to set Power Management Environment for lcore 0 00:06:51.413 [2024-11-04 16:01:09.905934] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:51.413 [2024-11-04 16:01:09.905965] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:51.413 [2024-11-04 16:01:09.905979] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:51.413 [2024-11-04 16:01:09.906001] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:51.413 [2024-11-04 16:01:09.906013] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:51.413 [2024-11-04 16:01:09.906026] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:51.413 16:01:09 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.413 16:01:09 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:51.413 16:01:09 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.413 16:01:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:51.673 [2024-11-04 16:01:10.240308] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:51.673 16:01:10 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.673 16:01:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:51.673 16:01:10 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:51.673 16:01:10 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:51.673 16:01:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:51.673 ************************************ 00:06:51.673 START TEST scheduler_create_thread 00:06:51.673 ************************************ 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.673 2 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.673 3 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.673 4 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.673 5 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.673 6 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.673 7 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.673 8 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.673 9 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.673 10 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.673 16:01:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.610 16:01:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.610 16:01:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:52.610 16:01:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.610 16:01:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.988 16:01:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.988 16:01:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:53.988 16:01:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:53.988 16:01:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.988 16:01:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.921 ************************************ 00:06:54.921 END TEST scheduler_create_thread 00:06:54.921 ************************************ 00:06:54.921 16:01:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.921 00:06:54.921 real 0m3.379s 00:06:54.921 user 0m0.026s 00:06:54.921 sys 0m0.004s 00:06:54.921 16:01:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:54.921 16:01:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.180 16:01:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:55.180 16:01:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59178 00:06:55.180 16:01:13 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 59178 ']' 00:06:55.180 16:01:13 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 59178 00:06:55.180 16:01:13 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:06:55.180 16:01:13 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:55.180 16:01:13 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59178 00:06:55.180 killing process with pid 59178 00:06:55.180 16:01:13 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:55.180 16:01:13 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:55.180 16:01:13 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59178' 00:06:55.181 16:01:13 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 59178 00:06:55.181 16:01:13 event.event_scheduler -- 
common/autotest_common.sh@976 -- # wait 59178 00:06:55.440 [2024-11-04 16:01:14.013364] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:56.816 00:06:56.816 real 0m6.556s 00:06:56.816 user 0m13.609s 00:06:56.816 sys 0m0.567s 00:06:56.816 ************************************ 00:06:56.816 END TEST event_scheduler 00:06:56.816 ************************************ 00:06:56.816 16:01:15 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.816 16:01:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.816 16:01:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:56.816 16:01:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:56.816 16:01:15 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:56.816 16:01:15 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.816 16:01:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.816 ************************************ 00:06:56.816 START TEST app_repeat 00:06:56.816 ************************************ 00:06:56.816 16:01:15 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:06:56.816 16:01:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.816 16:01:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.816 16:01:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:56.816 16:01:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.816 16:01:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:56.816 16:01:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:56.816 16:01:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:56.816 16:01:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59295 00:06:56.816 16:01:15 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:56.816 16:01:15 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:56.816 Process app_repeat pid: 59295 00:06:56.816 spdk_app_start Round 0 00:06:56.816 16:01:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59295' 00:06:56.816 16:01:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:56.816 16:01:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:56.816 16:01:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59295 /var/tmp/spdk-nbd.sock 00:06:56.816 16:01:15 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59295 ']' 00:06:56.816 16:01:15 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.816 16:01:15 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:56.816 16:01:15 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:56.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.816 16:01:15 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:56.816 16:01:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.816 [2024-11-04 16:01:15.436336] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
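Editor's note: the event_scheduler suite above drives a target started with --wait-for-rpc: the dynamic scheduler is selected before framework_start_init (the POWER/cpufreq messages only mean the DPDK governor cannot be used inside this VM, so initialization falls back as shown), after which the scheduler_plugin RPCs create, activate and delete test threads. A hand-run equivalent of the first part, assuming a target launched with --wait-for-rpc on the default socket:

    # select the dynamic scheduler before the framework initializes, then start it
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py framework_get_scheduler   # confirm the active scheduler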
00:06:56.816 [2024-11-04 16:01:15.436768] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59295 ] 00:06:57.075 [2024-11-04 16:01:15.632435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.075 [2024-11-04 16:01:15.787340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.075 [2024-11-04 16:01:15.787370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.011 16:01:16 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:58.011 16:01:16 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:58.011 16:01:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.011 Malloc0 00:06:58.011 16:01:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.579 Malloc1 00:06:58.579 16:01:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.579 16:01:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.579 16:01:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.579 16:01:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.579 16:01:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.579 16:01:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.579 16:01:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.579 16:01:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.579 16:01:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.579 16:01:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.579 16:01:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.579 16:01:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.579 16:01:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.579 16:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.579 16:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.579 16:01:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:58.579 /dev/nbd0 00:06:58.838 16:01:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.838 16:01:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.838 16:01:17 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:58.838 16:01:17 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:58.838 16:01:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:58.838 16:01:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:58.838 16:01:17 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:58.838 16:01:17 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:06:58.838 16:01:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:58.838 16:01:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:58.838 16:01:17 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.838 1+0 records in 00:06:58.838 1+0 records out 00:06:58.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251876 s, 16.3 MB/s 00:06:58.838 16:01:17 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.838 16:01:17 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:58.838 16:01:17 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.838 16:01:17 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:58.838 16:01:17 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:58.838 16:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.838 16:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.838 16:01:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:59.097 /dev/nbd1 00:06:59.097 16:01:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:59.097 16:01:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:59.097 16:01:17 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:59.097 16:01:17 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:59.097 16:01:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:59.097 16:01:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:59.097 16:01:17 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:59.097 16:01:17 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:59.097 16:01:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:59.097 16:01:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:59.097 16:01:17 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:59.097 1+0 records in 00:06:59.097 1+0 records out 00:06:59.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601255 s, 6.8 MB/s 00:06:59.097 16:01:17 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.097 16:01:17 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:59.097 16:01:17 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.097 16:01:17 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:59.097 16:01:17 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:59.097 16:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.097 16:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.097 16:01:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.097 16:01:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.097 
16:01:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.356 16:01:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.356 { 00:06:59.356 "nbd_device": "/dev/nbd0", 00:06:59.356 "bdev_name": "Malloc0" 00:06:59.356 }, 00:06:59.356 { 00:06:59.356 "nbd_device": "/dev/nbd1", 00:06:59.356 "bdev_name": "Malloc1" 00:06:59.356 } 00:06:59.356 ]' 00:06:59.356 16:01:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.356 16:01:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.356 { 00:06:59.356 "nbd_device": "/dev/nbd0", 00:06:59.356 "bdev_name": "Malloc0" 00:06:59.356 }, 00:06:59.356 { 00:06:59.356 "nbd_device": "/dev/nbd1", 00:06:59.356 "bdev_name": "Malloc1" 00:06:59.356 } 00:06:59.356 ]' 00:06:59.356 16:01:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.356 /dev/nbd1' 00:06:59.356 16:01:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.356 16:01:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.356 /dev/nbd1' 00:06:59.356 16:01:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.356 16:01:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.356 16:01:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.356 16:01:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.357 16:01:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.357 16:01:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.357 16:01:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.357 16:01:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.357 16:01:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.357 16:01:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.357 16:01:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.357 256+0 records in 00:06:59.357 256+0 records out 00:06:59.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137806 s, 76.1 MB/s 00:06:59.357 16:01:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.357 16:01:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.357 256+0 records in 00:06:59.357 256+0 records out 00:06:59.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0343654 s, 30.5 MB/s 00:06:59.357 16:01:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.357 16:01:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.357 256+0 records in 00:06:59.357 256+0 records out 00:06:59.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0350917 s, 29.9 MB/s 00:06:59.357 16:01:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:59.357 16:01:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.357 16:01:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.357 16:01:18 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.357 16:01:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.357 16:01:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.357 16:01:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.357 16:01:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.357 16:01:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.615 16:01:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.615 16:01:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.615 16:01:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.615 16:01:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.615 16:01:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.615 16:01:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.615 16:01:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.615 16:01:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:59.615 16:01:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.615 16:01:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.874 16:01:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.874 16:01:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.874 16:01:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.874 16:01:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.874 16:01:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.874 16:01:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.874 16:01:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.874 16:01:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.874 16:01:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.874 16:01:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.133 16:01:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.133 16:01:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.133 16:01:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.133 16:01:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.133 16:01:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.133 16:01:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.133 16:01:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.133 16:01:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.133 16:01:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.133 16:01:18 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.133 16:01:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.392 16:01:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.392 16:01:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.392 16:01:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.392 16:01:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.392 16:01:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.392 16:01:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.392 16:01:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:00.392 16:01:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.392 16:01:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.392 16:01:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:00.392 16:01:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:00.392 16:01:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:00.392 16:01:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.960 16:01:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:02.338 [2024-11-04 16:01:20.647481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.338 [2024-11-04 16:01:20.799494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.338 [2024-11-04 16:01:20.799494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.338 [2024-11-04 16:01:21.036933] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:02.338 [2024-11-04 16:01:21.037056] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:03.715 16:01:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:03.715 spdk_app_start Round 1 00:07:03.715 16:01:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:03.715 16:01:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59295 /var/tmp/spdk-nbd.sock 00:07:03.715 16:01:22 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59295 ']' 00:07:03.715 16:01:22 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.715 16:01:22 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:03.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:03.715 16:01:22 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
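Each app_repeat round in this trace exercises nbd_rpc_data_verify: the harness fills a temporary file with 1 MiB of random data, writes it through both NBD devices backed by the Malloc bdevs, then reads it back with cmp before tearing the devices down. A minimal bash sketch of that write/verify cycle, assuming /dev/nbd0 and /dev/nbd1 are already attached (the temp-file path is illustrative; the real helper lives in nbd_common.sh):

    tmp_file=/tmp/nbdrandtest                                        # stand-in for the per-test temp file
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct   # write the pattern to each device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$nbd"                              # byte-compare what comes back
    done
    rm "$tmp_file"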
00:07:03.715 16:01:22 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:03.715 16:01:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.974 16:01:22 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:03.974 16:01:22 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:03.974 16:01:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.232 Malloc0 00:07:04.232 16:01:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.836 Malloc1 00:07:04.836 16:01:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:04.836 /dev/nbd0 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.836 16:01:23 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:04.836 16:01:23 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:04.836 16:01:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:04.836 16:01:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:04.836 16:01:23 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:04.836 16:01:23 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:04.836 16:01:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:04.836 16:01:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:04.836 16:01:23 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.836 1+0 records in 00:07:04.836 1+0 records out 
00:07:04.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188606 s, 21.7 MB/s 00:07:04.836 16:01:23 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.836 16:01:23 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:04.836 16:01:23 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.836 16:01:23 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:04.836 16:01:23 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.836 16:01:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:05.095 /dev/nbd1 00:07:05.095 16:01:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.095 16:01:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.095 16:01:23 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:05.095 16:01:23 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:05.095 16:01:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:05.095 16:01:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:05.095 16:01:23 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:05.095 16:01:23 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:05.095 16:01:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:05.095 16:01:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:05.095 16:01:23 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.095 1+0 records in 00:07:05.095 1+0 records out 00:07:05.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456553 s, 9.0 MB/s 00:07:05.095 16:01:23 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.095 16:01:23 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:05.095 16:01:23 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.095 16:01:23 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:05.095 16:01:23 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:05.095 16:01:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.095 16:01:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.095 16:01:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.095 16:01:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.095 16:01:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.662 { 00:07:05.662 "nbd_device": "/dev/nbd0", 00:07:05.662 "bdev_name": "Malloc0" 00:07:05.662 }, 00:07:05.662 { 00:07:05.662 "nbd_device": "/dev/nbd1", 00:07:05.662 "bdev_name": "Malloc1" 00:07:05.662 } 
00:07:05.662 ]' 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.662 { 00:07:05.662 "nbd_device": "/dev/nbd0", 00:07:05.662 "bdev_name": "Malloc0" 00:07:05.662 }, 00:07:05.662 { 00:07:05.662 "nbd_device": "/dev/nbd1", 00:07:05.662 "bdev_name": "Malloc1" 00:07:05.662 } 00:07:05.662 ]' 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.662 /dev/nbd1' 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.662 /dev/nbd1' 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:05.662 256+0 records in 00:07:05.662 256+0 records out 00:07:05.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122563 s, 85.6 MB/s 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.662 256+0 records in 00:07:05.662 256+0 records out 00:07:05.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277628 s, 37.8 MB/s 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.662 256+0 records in 00:07:05.662 256+0 records out 00:07:05.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0375063 s, 28.0 MB/s 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:05.662 16:01:24 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.662 16:01:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.921 16:01:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.921 16:01:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.921 16:01:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.921 16:01:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.921 16:01:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.921 16:01:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.921 16:01:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.921 16:01:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.921 16:01:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.921 16:01:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.179 16:01:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.179 16:01:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.179 16:01:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.179 16:01:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.179 16:01:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.179 16:01:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.179 16:01:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:06.179 16:01:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.179 16:01:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.179 16:01:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.179 16:01:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.442 16:01:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.442 16:01:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.442 16:01:25 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:06.706 16:01:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.706 16:01:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.706 16:01:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.706 16:01:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:06.706 16:01:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.706 16:01:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.706 16:01:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:06.706 16:01:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:06.706 16:01:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:06.706 16:01:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:06.964 16:01:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:08.343 [2024-11-04 16:01:26.895538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:08.343 [2024-11-04 16:01:27.042983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.343 [2024-11-04 16:01:27.042993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.617 [2024-11-04 16:01:27.281737] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:08.617 [2024-11-04 16:01:27.282048] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:10.087 16:01:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:10.087 spdk_app_start Round 2 00:07:10.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:10.087 16:01:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:10.087 16:01:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59295 /var/tmp/spdk-nbd.sock 00:07:10.087 16:01:28 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59295 ']' 00:07:10.087 16:01:28 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.087 16:01:28 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:10.087 16:01:28 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
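Every round finishes with the nbd_get_count check seen above: after nbd_stop_disk the target is asked which NBD devices are still exported, and the list must be empty before the app is killed. A condensed sketch of that check, following the traced nbd_get_disks / jq / grep -c sequence (socket path and variable names mirror the trace):

    disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)     # JSON array of exported disks
    nbd_disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')         # extract the /dev/nbdX names
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)               # grep -c exits non-zero on zero matches
    [ "$count" -eq 0 ]                                                       # teardown must leave no devices behind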
00:07:10.087 16:01:28 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:10.087 16:01:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.345 16:01:28 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:10.345 16:01:28 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:10.345 16:01:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.604 Malloc0 00:07:10.604 16:01:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.862 Malloc1 00:07:10.863 16:01:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:10.863 16:01:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.863 16:01:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.863 16:01:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:10.863 16:01:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.863 16:01:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:10.863 16:01:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:10.863 16:01:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.863 16:01:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.863 16:01:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:10.863 16:01:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.863 16:01:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:10.863 16:01:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:10.863 16:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:10.863 16:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:10.863 16:01:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:11.122 /dev/nbd0 00:07:11.122 16:01:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:11.122 16:01:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:11.122 16:01:29 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:11.122 16:01:29 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:11.122 16:01:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:11.122 16:01:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:11.122 16:01:29 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:11.122 16:01:29 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:11.122 16:01:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:11.122 16:01:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:11.122 16:01:29 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.122 1+0 records in 00:07:11.122 1+0 records out 
00:07:11.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278809 s, 14.7 MB/s 00:07:11.122 16:01:29 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.122 16:01:29 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:11.122 16:01:29 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.122 16:01:29 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:11.122 16:01:29 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:11.122 16:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.122 16:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.122 16:01:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:11.381 /dev/nbd1 00:07:11.381 16:01:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:11.381 16:01:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:11.381 16:01:30 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:11.381 16:01:30 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:11.381 16:01:30 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:11.381 16:01:30 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:11.381 16:01:30 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:11.381 16:01:30 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:11.381 16:01:30 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:11.381 16:01:30 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:11.381 16:01:30 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.381 1+0 records in 00:07:11.381 1+0 records out 00:07:11.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028945 s, 14.2 MB/s 00:07:11.381 16:01:30 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.381 16:01:30 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:11.381 16:01:30 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.381 16:01:30 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:11.381 16:01:30 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:11.381 16:01:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.381 16:01:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.381 16:01:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.381 16:01:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.640 16:01:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.640 16:01:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:11.640 { 00:07:11.640 "nbd_device": "/dev/nbd0", 00:07:11.640 "bdev_name": "Malloc0" 00:07:11.640 }, 00:07:11.640 { 00:07:11.640 "nbd_device": "/dev/nbd1", 00:07:11.640 "bdev_name": "Malloc1" 00:07:11.640 } 
00:07:11.640 ]' 00:07:11.640 16:01:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:11.640 { 00:07:11.640 "nbd_device": "/dev/nbd0", 00:07:11.640 "bdev_name": "Malloc0" 00:07:11.640 }, 00:07:11.640 { 00:07:11.640 "nbd_device": "/dev/nbd1", 00:07:11.640 "bdev_name": "Malloc1" 00:07:11.640 } 00:07:11.640 ]' 00:07:11.640 16:01:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:11.899 /dev/nbd1' 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:11.899 /dev/nbd1' 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:11.899 256+0 records in 00:07:11.899 256+0 records out 00:07:11.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121484 s, 86.3 MB/s 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:11.899 256+0 records in 00:07:11.899 256+0 records out 00:07:11.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291498 s, 36.0 MB/s 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:11.899 256+0 records in 00:07:11.899 256+0 records out 00:07:11.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308768 s, 34.0 MB/s 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:11.899 16:01:30 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.899 16:01:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.157 16:01:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.157 16:01:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.157 16:01:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.157 16:01:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.157 16:01:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.157 16:01:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.157 16:01:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.157 16:01:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.157 16:01:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.158 16:01:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:12.416 16:01:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:12.416 16:01:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:12.416 16:01:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:12.416 16:01:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.416 16:01:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.416 16:01:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:12.416 16:01:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.416 16:01:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.416 16:01:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.416 16:01:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.416 16:01:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.674 16:01:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:12.674 16:01:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.674 16:01:31 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:07:12.674 16:01:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:12.674 16:01:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.674 16:01:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:12.674 16:01:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:12.674 16:01:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:12.674 16:01:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:12.674 16:01:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:12.674 16:01:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:12.674 16:01:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:12.674 16:01:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:13.248 16:01:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:14.623 [2024-11-04 16:01:32.993154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.623 [2024-11-04 16:01:33.133150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.623 [2024-11-04 16:01:33.133151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.882 [2024-11-04 16:01:33.360549] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:14.882 [2024-11-04 16:01:33.360677] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:16.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:16.268 16:01:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59295 /var/tmp/spdk-nbd.sock 00:07:16.268 16:01:34 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59295 ']' 00:07:16.268 16:01:34 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:16.268 16:01:34 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:16.268 16:01:34 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
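Earlier in each round, every nbd_start_disk is immediately followed by the waitfornbd probe visible in the common/autotest_common.sh trace lines: the helper polls /proc/partitions until the kernel has registered the device, then proves it serves I/O with a single direct read. A sketch of that helper as reconstructed from the trace (the sleep between retries and the /tmp paths are assumptions; the trace only shows the first, successful probe):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break             # device visible to the kernel?
            sleep 0.1                                                    # assumed back-off between probes
        done
        # prove the device serves I/O: one direct 4 KiB read must produce data
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }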
00:07:16.268 16:01:34 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:16.268 16:01:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:16.268 16:01:34 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:16.268 16:01:34 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:16.268 16:01:34 event.app_repeat -- event/event.sh@39 -- # killprocess 59295 00:07:16.268 16:01:34 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 59295 ']' 00:07:16.268 16:01:34 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 59295 00:07:16.268 16:01:34 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:07:16.526 16:01:34 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:16.526 16:01:34 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59295 00:07:16.526 killing process with pid 59295 00:07:16.526 16:01:35 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:16.526 16:01:35 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:16.526 16:01:35 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59295' 00:07:16.526 16:01:35 event.app_repeat -- common/autotest_common.sh@971 -- # kill 59295 00:07:16.526 16:01:35 event.app_repeat -- common/autotest_common.sh@976 -- # wait 59295 00:07:17.460 spdk_app_start is called in Round 0. 00:07:17.460 Shutdown signal received, stop current app iteration 00:07:17.460 Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 reinitialization... 00:07:17.460 spdk_app_start is called in Round 1. 00:07:17.460 Shutdown signal received, stop current app iteration 00:07:17.460 Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 reinitialization... 00:07:17.460 spdk_app_start is called in Round 2. 00:07:17.460 Shutdown signal received, stop current app iteration 00:07:17.460 Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 reinitialization... 00:07:17.460 spdk_app_start is called in Round 3. 00:07:17.460 Shutdown signal received, stop current app iteration 00:07:17.460 16:01:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:17.460 16:01:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:17.460 00:07:17.460 real 0m20.751s 00:07:17.460 user 0m44.254s 00:07:17.460 sys 0m3.642s 00:07:17.460 16:01:36 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:17.460 ************************************ 00:07:17.460 END TEST app_repeat 00:07:17.460 ************************************ 00:07:17.460 16:01:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:17.460 16:01:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:17.460 16:01:36 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:17.460 16:01:36 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:17.460 16:01:36 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:17.460 16:01:36 event -- common/autotest_common.sh@10 -- # set +x 00:07:17.718 ************************************ 00:07:17.718 START TEST cpu_locks 00:07:17.718 ************************************ 00:07:17.718 16:01:36 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:17.718 * Looking for test storage... 
00:07:17.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:17.718 16:01:36 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:17.718 16:01:36 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:07:17.718 16:01:36 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:17.718 16:01:36 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.718 16:01:36 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:17.718 16:01:36 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.718 16:01:36 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:17.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.718 --rc genhtml_branch_coverage=1 00:07:17.718 --rc genhtml_function_coverage=1 00:07:17.718 --rc genhtml_legend=1 00:07:17.718 --rc geninfo_all_blocks=1 00:07:17.718 --rc geninfo_unexecuted_blocks=1 00:07:17.718 00:07:17.718 ' 00:07:17.718 16:01:36 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:17.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.718 --rc genhtml_branch_coverage=1 00:07:17.718 --rc genhtml_function_coverage=1 
00:07:17.718 --rc genhtml_legend=1 00:07:17.718 --rc geninfo_all_blocks=1 00:07:17.718 --rc geninfo_unexecuted_blocks=1 00:07:17.718 00:07:17.718 ' 00:07:17.718 16:01:36 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:17.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.718 --rc genhtml_branch_coverage=1 00:07:17.718 --rc genhtml_function_coverage=1 00:07:17.718 --rc genhtml_legend=1 00:07:17.718 --rc geninfo_all_blocks=1 00:07:17.718 --rc geninfo_unexecuted_blocks=1 00:07:17.718 00:07:17.718 ' 00:07:17.718 16:01:36 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:17.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.718 --rc genhtml_branch_coverage=1 00:07:17.718 --rc genhtml_function_coverage=1 00:07:17.718 --rc genhtml_legend=1 00:07:17.718 --rc geninfo_all_blocks=1 00:07:17.718 --rc geninfo_unexecuted_blocks=1 00:07:17.718 00:07:17.718 ' 00:07:17.718 16:01:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:17.718 16:01:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:17.718 16:01:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:17.718 16:01:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:17.718 16:01:36 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:17.718 16:01:36 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:17.718 16:01:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.718 ************************************ 00:07:17.718 START TEST default_locks 00:07:17.718 ************************************ 00:07:17.718 16:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:07:17.718 16:01:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59761 00:07:17.718 16:01:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:17.718 16:01:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59761 00:07:17.718 16:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59761 ']' 00:07:17.718 16:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.718 16:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:17.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.718 16:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.718 16:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:17.718 16:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.976 [2024-11-04 16:01:36.543121] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:07:17.976 [2024-11-04 16:01:36.543255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59761 ] 00:07:18.233 [2024-11-04 16:01:36.729839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.233 [2024-11-04 16:01:36.855802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.170 16:01:37 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:19.170 16:01:37 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:07:19.170 16:01:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59761 00:07:19.170 16:01:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:19.170 16:01:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59761 00:07:19.736 16:01:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59761 00:07:19.736 16:01:38 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 59761 ']' 00:07:19.736 16:01:38 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 59761 00:07:19.736 16:01:38 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:07:19.736 16:01:38 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:19.736 16:01:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59761 00:07:19.736 16:01:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:19.736 16:01:38 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:19.736 killing process with pid 59761 00:07:19.736 16:01:38 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59761' 00:07:19.736 16:01:38 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 59761 00:07:19.736 16:01:38 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 59761 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59761 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59761 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59761 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59761 ']' 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:22.268 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.268 ERROR: process (pid: 59761) is no longer running 00:07:22.268 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59761) - No such process 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:22.268 00:07:22.268 real 0m4.247s 00:07:22.268 user 0m4.209s 00:07:22.268 sys 0m0.728s 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:22.268 16:01:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.268 ************************************ 00:07:22.268 END TEST default_locks 00:07:22.268 ************************************ 00:07:22.268 16:01:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:22.268 16:01:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:22.268 16:01:40 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:22.268 16:01:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.268 ************************************ 00:07:22.268 START TEST default_locks_via_rpc 00:07:22.268 ************************************ 00:07:22.268 16:01:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:07:22.268 16:01:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59836 00:07:22.268 16:01:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59836 00:07:22.268 16:01:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:22.268 16:01:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59836 ']' 00:07:22.268 16:01:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.268 16:01:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:22.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:22.268 16:01:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.269 16:01:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:22.269 16:01:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.269 [2024-11-04 16:01:40.865776] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:07:22.269 [2024-11-04 16:01:40.866359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59836 ] 00:07:22.527 [2024-11-04 16:01:41.049071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.527 [2024-11-04 16:01:41.166022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.478 16:01:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:23.478 16:01:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:23.478 16:01:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:23.478 16:01:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.478 16:01:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.478 16:01:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.478 16:01:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:23.478 16:01:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:23.478 16:01:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:23.478 16:01:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:23.478 16:01:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:23.478 16:01:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.478 16:01:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.478 16:01:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.478 16:01:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59836 00:07:23.478 16:01:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59836 00:07:23.478 16:01:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.047 16:01:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59836 00:07:24.047 16:01:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 59836 ']' 00:07:24.047 16:01:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 59836 00:07:24.047 16:01:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:07:24.047 16:01:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:24.047 16:01:42 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59836 00:07:24.047 killing process with pid 59836 00:07:24.047 16:01:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:24.047 16:01:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:24.047 16:01:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59836' 00:07:24.047 16:01:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 59836 00:07:24.047 16:01:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 59836 00:07:26.579 ************************************ 00:07:26.579 END TEST default_locks_via_rpc 00:07:26.579 ************************************ 00:07:26.579 00:07:26.579 real 0m4.411s 00:07:26.579 user 0m4.364s 00:07:26.579 sys 0m0.725s 00:07:26.579 16:01:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:26.579 16:01:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.579 16:01:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:26.579 16:01:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:26.579 16:01:45 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.579 16:01:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.579 ************************************ 00:07:26.579 START TEST non_locking_app_on_locked_coremask 00:07:26.579 ************************************ 00:07:26.579 16:01:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:07:26.579 16:01:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59918 00:07:26.579 16:01:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:26.579 16:01:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59918 /var/tmp/spdk.sock 00:07:26.579 16:01:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59918 ']' 00:07:26.579 16:01:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.579 16:01:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:26.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.579 16:01:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.579 16:01:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:26.579 16:01:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.837 [2024-11-04 16:01:45.371185] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:07:26.837 [2024-11-04 16:01:45.372288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59918 ] 00:07:26.837 [2024-11-04 16:01:45.558072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.095 [2024-11-04 16:01:45.714388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.470 16:01:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:28.470 16:01:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:28.470 16:01:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59940 00:07:28.470 16:01:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59940 /var/tmp/spdk2.sock 00:07:28.470 16:01:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:28.470 16:01:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59940 ']' 00:07:28.470 16:01:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:28.470 16:01:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:28.470 16:01:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:28.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:28.470 16:01:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:28.470 16:01:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.470 [2024-11-04 16:01:46.929100] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:07:28.470 [2024-11-04 16:01:46.929847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59940 ] 00:07:28.470 [2024-11-04 16:01:47.133072] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:28.470 [2024-11-04 16:01:47.133178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.037 [2024-11-04 16:01:47.456970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.571 16:01:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:31.571 16:01:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:31.571 16:01:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59918 00:07:31.571 16:01:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59918 00:07:31.571 16:01:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:31.828 16:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59918 00:07:31.828 16:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59918 ']' 00:07:31.828 16:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59918 00:07:31.828 16:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:31.828 16:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:31.828 16:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59918 00:07:32.086 16:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:32.086 16:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:32.086 killing process with pid 59918 00:07:32.086 16:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59918' 00:07:32.086 16:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59918 00:07:32.086 16:01:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59918 00:07:37.353 16:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59940 00:07:37.353 16:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59940 ']' 00:07:37.353 16:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59940 00:07:37.353 16:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:37.353 16:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:37.353 16:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59940 00:07:37.353 killing process with pid 59940 00:07:37.353 16:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:37.353 16:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:37.353 16:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59940' 00:07:37.353 16:01:55 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59940 00:07:37.353 16:01:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59940 00:07:40.640 ************************************ 00:07:40.640 END TEST non_locking_app_on_locked_coremask 00:07:40.640 ************************************ 00:07:40.640 00:07:40.640 real 0m13.485s 00:07:40.640 user 0m13.524s 00:07:40.640 sys 0m1.889s 00:07:40.640 16:01:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:40.640 16:01:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.640 16:01:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:40.640 16:01:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:40.640 16:01:58 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:40.640 16:01:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.640 ************************************ 00:07:40.640 START TEST locking_app_on_unlocked_coremask 00:07:40.640 ************************************ 00:07:40.640 16:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:07:40.640 16:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60099 00:07:40.640 16:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60099 /var/tmp/spdk.sock 00:07:40.640 16:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:40.640 16:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60099 ']' 00:07:40.640 16:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.640 16:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:40.640 16:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.640 16:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:40.640 16:01:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.640 [2024-11-04 16:01:58.930646] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:07:40.640 [2024-11-04 16:01:58.931038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60099 ] 00:07:40.640 [2024-11-04 16:01:59.119437] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:40.640 [2024-11-04 16:01:59.119640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.640 [2024-11-04 16:01:59.256582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.018 16:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:42.018 16:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:42.018 16:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:42.018 16:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60126 00:07:42.018 16:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60126 /var/tmp/spdk2.sock 00:07:42.018 16:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60126 ']' 00:07:42.018 16:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:42.018 16:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:42.018 16:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:42.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:42.018 16:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:42.018 16:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.018 [2024-11-04 16:02:00.445377] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:07:42.018 [2024-11-04 16:02:00.445820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60126 ] 00:07:42.018 [2024-11-04 16:02:00.643562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.277 [2024-11-04 16:02:00.936885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.812 16:02:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:44.812 16:02:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:44.812 16:02:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60126 00:07:44.812 16:02:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60126 00:07:44.812 16:02:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:45.380 16:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60099 00:07:45.380 16:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60099 ']' 00:07:45.380 16:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60099 00:07:45.380 16:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:45.380 16:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:45.380 16:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60099 00:07:45.380 16:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:45.380 16:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:45.380 16:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60099' 00:07:45.380 killing process with pid 60099 00:07:45.380 16:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60099 00:07:45.380 16:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60099 00:07:51.945 16:02:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60126 00:07:51.945 16:02:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60126 ']' 00:07:51.945 16:02:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60126 00:07:51.945 16:02:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:51.945 16:02:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:51.945 16:02:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60126 00:07:51.945 16:02:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:51.945 killing process with pid 60126 00:07:51.945 16:02:09 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:51.945 16:02:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60126' 00:07:51.945 16:02:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60126 00:07:51.945 16:02:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60126 00:07:53.323 00:07:53.323 real 0m13.223s 00:07:53.323 user 0m13.193s 00:07:53.323 sys 0m1.834s 00:07:53.323 ************************************ 00:07:53.323 END TEST locking_app_on_unlocked_coremask 00:07:53.323 ************************************ 00:07:53.323 16:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:53.323 16:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:53.582 16:02:12 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:53.582 16:02:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:53.582 16:02:12 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:53.582 16:02:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:53.582 ************************************ 00:07:53.582 START TEST locking_app_on_locked_coremask 00:07:53.582 ************************************ 00:07:53.582 16:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:07:53.582 16:02:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:53.582 16:02:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60288 00:07:53.582 16:02:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60288 /var/tmp/spdk.sock 00:07:53.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.582 16:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60288 ']' 00:07:53.582 16:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.582 16:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:53.582 16:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.582 16:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:53.582 16:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:53.582 [2024-11-04 16:02:12.207147] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:07:53.582 [2024-11-04 16:02:12.207648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60288 ] 00:07:53.841 [2024-11-04 16:02:12.408619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.098 [2024-11-04 16:02:12.569465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60310 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60310 /var/tmp/spdk2.sock 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60310 /var/tmp/spdk2.sock 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60310 /var/tmp/spdk2.sock 00:07:55.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60310 ']' 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:55.031 16:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.289 [2024-11-04 16:02:13.813460] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:07:55.289 [2024-11-04 16:02:13.813601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60310 ] 00:07:55.289 [2024-11-04 16:02:14.009711] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60288 has claimed it. 00:07:55.289 [2024-11-04 16:02:14.009819] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:55.915 ERROR: process (pid: 60310) is no longer running 00:07:55.915 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60310) - No such process 00:07:55.915 16:02:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:55.915 16:02:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:55.915 16:02:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:55.915 16:02:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:55.915 16:02:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:55.915 16:02:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:55.915 16:02:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60288 00:07:55.915 16:02:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:55.915 16:02:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60288 00:07:56.484 16:02:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60288 00:07:56.484 16:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60288 ']' 00:07:56.484 16:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60288 00:07:56.484 16:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:56.484 16:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:56.484 16:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60288 00:07:56.484 16:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:56.484 16:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:56.484 killing process with pid 60288 00:07:56.484 16:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60288' 00:07:56.484 16:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60288 00:07:56.484 16:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60288 00:07:59.018 00:07:59.018 real 0m5.436s 00:07:59.018 user 0m5.672s 00:07:59.018 sys 0m1.134s 00:07:59.018 16:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:59.018 16:02:17 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:59.018 ************************************ 00:07:59.018 END TEST locking_app_on_locked_coremask 00:07:59.018 ************************************ 00:07:59.018 16:02:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:59.018 16:02:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:59.018 16:02:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:59.018 16:02:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.018 ************************************ 00:07:59.018 START TEST locking_overlapped_coremask 00:07:59.018 ************************************ 00:07:59.018 16:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:07:59.018 16:02:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60385 00:07:59.018 16:02:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:59.018 16:02:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60385 /var/tmp/spdk.sock 00:07:59.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.018 16:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60385 ']' 00:07:59.018 16:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.018 16:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:59.018 16:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.018 16:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:59.018 16:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.018 [2024-11-04 16:02:17.714893] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:07:59.018 [2024-11-04 16:02:17.715968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60385 ] 00:07:59.277 [2024-11-04 16:02:17.891691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.536 [2024-11-04 16:02:18.028671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.536 [2024-11-04 16:02:18.028829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.536 [2024-11-04 16:02:18.028861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60403 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60403 /var/tmp/spdk2.sock 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60403 /var/tmp/spdk2.sock 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:00.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60403 /var/tmp/spdk2.sock 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60403 ']' 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:00.491 16:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.491 [2024-11-04 16:02:19.054439] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:08:00.491 [2024-11-04 16:02:19.054558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60403 ] 00:08:00.747 [2024-11-04 16:02:19.241023] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60385 has claimed it. 00:08:00.747 [2024-11-04 16:02:19.241103] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:01.006 ERROR: process (pid: 60403) is no longer running 00:08:01.006 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60403) - No such process 00:08:01.006 16:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:01.006 16:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:01.006 16:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:01.006 16:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:01.006 16:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:01.006 16:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:01.006 16:02:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:01.006 16:02:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:01.006 16:02:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:01.006 16:02:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:01.006 16:02:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60385 00:08:01.006 16:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 60385 ']' 00:08:01.006 16:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 60385 00:08:01.006 16:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:08:01.006 16:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:01.006 16:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60385 00:08:01.265 16:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:01.265 16:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:01.265 16:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60385' 00:08:01.265 killing process with pid 60385 00:08:01.265 16:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 60385 00:08:01.265 16:02:19 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 60385 00:08:03.799 00:08:03.799 real 0m4.570s 00:08:03.799 user 0m12.312s 00:08:03.799 sys 0m0.646s 00:08:03.799 16:02:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:03.799 16:02:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:03.799 ************************************ 00:08:03.799 END TEST locking_overlapped_coremask 00:08:03.799 ************************************ 00:08:03.799 16:02:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:03.799 16:02:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:03.799 16:02:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:03.799 16:02:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:03.799 ************************************ 00:08:03.799 START TEST locking_overlapped_coremask_via_rpc 00:08:03.799 ************************************ 00:08:03.799 16:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:08:03.799 16:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60467 00:08:03.799 16:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60467 /var/tmp/spdk.sock 00:08:03.799 16:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:03.799 16:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60467 ']' 00:08:03.799 16:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.799 16:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:03.799 16:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.799 16:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:03.800 16:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.800 [2024-11-04 16:02:22.345836] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:08:03.800 [2024-11-04 16:02:22.347601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60467 ] 00:08:04.059 [2024-11-04 16:02:22.523685] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:04.059 [2024-11-04 16:02:22.523764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:04.059 [2024-11-04 16:02:22.648554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.059 [2024-11-04 16:02:22.648699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.059 [2024-11-04 16:02:22.648729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:04.996 16:02:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:04.996 16:02:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:04.996 16:02:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60485 00:08:04.996 16:02:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60485 /var/tmp/spdk2.sock 00:08:04.996 16:02:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60485 ']' 00:08:04.996 16:02:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:04.996 16:02:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:04.996 16:02:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:04.996 16:02:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:04.996 16:02:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:04.996 16:02:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:04.996 [2024-11-04 16:02:23.642481] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:08:04.996 [2024-11-04 16:02:23.642592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60485 ] 00:08:05.255 [2024-11-04 16:02:23.830719] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:05.255 [2024-11-04 16:02:23.830792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:05.515 [2024-11-04 16:02:24.073853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.515 [2024-11-04 16:02:24.076942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.515 [2024-11-04 16:02:24.076976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:08.051 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:08.051 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:08.051 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:08.051 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.051 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.051 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.051 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:08.051 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:08.051 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:08.051 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:08.051 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.051 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:08.051 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.051 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.052 [2024-11-04 16:02:26.213929] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60467 has claimed it. 
00:08:08.052 request: 00:08:08.052 { 00:08:08.052 "method": "framework_enable_cpumask_locks", 00:08:08.052 "req_id": 1 00:08:08.052 } 00:08:08.052 Got JSON-RPC error response 00:08:08.052 response: 00:08:08.052 { 00:08:08.052 "code": -32603, 00:08:08.052 "message": "Failed to claim CPU core: 2" 00:08:08.052 } 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60467 /var/tmp/spdk.sock 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60467 ']' 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:08.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60485 /var/tmp/spdk2.sock 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60485 ']' 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:08.052 00:08:08.052 real 0m4.418s 00:08:08.052 user 0m1.267s 00:08:08.052 sys 0m0.258s 00:08:08.052 ************************************ 00:08:08.052 END TEST locking_overlapped_coremask_via_rpc 00:08:08.052 ************************************ 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:08.052 16:02:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.052 16:02:26 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:08.052 16:02:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60467 ]] 00:08:08.052 16:02:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60467 00:08:08.052 16:02:26 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60467 ']' 00:08:08.052 16:02:26 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60467 00:08:08.052 16:02:26 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:08.052 16:02:26 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:08.052 16:02:26 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60467 00:08:08.052 killing process with pid 60467 00:08:08.052 16:02:26 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:08.052 16:02:26 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:08.052 16:02:26 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60467' 00:08:08.052 16:02:26 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60467 00:08:08.052 16:02:26 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60467 00:08:10.586 16:02:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60485 ]] 00:08:10.586 16:02:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60485 00:08:10.586 16:02:29 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60485 ']' 00:08:10.586 16:02:29 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60485 00:08:10.586 16:02:29 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:10.586 16:02:29 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:10.586 
16:02:29 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60485 00:08:10.586 killing process with pid 60485 00:08:10.586 16:02:29 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:10.586 16:02:29 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:10.586 16:02:29 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60485' 00:08:10.586 16:02:29 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60485 00:08:10.586 16:02:29 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60485 00:08:13.120 16:02:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:13.120 16:02:31 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:13.120 16:02:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60467 ]] 00:08:13.120 16:02:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60467 00:08:13.120 16:02:31 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60467 ']' 00:08:13.120 16:02:31 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60467 00:08:13.120 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60467) - No such process 00:08:13.120 Process with pid 60467 is not found 00:08:13.120 16:02:31 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60467 is not found' 00:08:13.120 16:02:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60485 ]] 00:08:13.120 Process with pid 60485 is not found 00:08:13.120 16:02:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60485 00:08:13.120 16:02:31 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60485 ']' 00:08:13.120 16:02:31 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60485 00:08:13.120 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60485) - No such process 00:08:13.120 16:02:31 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60485 is not found' 00:08:13.120 16:02:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:13.120 00:08:13.120 real 0m55.466s 00:08:13.120 user 1m30.658s 00:08:13.120 sys 0m8.534s 00:08:13.120 ************************************ 00:08:13.120 END TEST cpu_locks 00:08:13.120 ************************************ 00:08:13.120 16:02:31 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:13.120 16:02:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:13.120 ************************************ 00:08:13.120 END TEST event 00:08:13.120 ************************************ 00:08:13.120 00:08:13.120 real 1m28.403s 00:08:13.120 user 2m35.975s 00:08:13.120 sys 0m13.583s 00:08:13.120 16:02:31 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:13.120 16:02:31 event -- common/autotest_common.sh@10 -- # set +x 00:08:13.120 16:02:31 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:13.120 16:02:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:13.120 16:02:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:13.120 16:02:31 -- common/autotest_common.sh@10 -- # set +x 00:08:13.120 ************************************ 00:08:13.120 START TEST thread 00:08:13.120 ************************************ 00:08:13.120 16:02:31 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:13.379 * Looking for test storage... 
00:08:13.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:13.380 16:02:31 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:13.380 16:02:31 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:08:13.380 16:02:31 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:13.380 16:02:31 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:13.380 16:02:31 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.380 16:02:31 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.380 16:02:31 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.380 16:02:31 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.380 16:02:31 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.380 16:02:31 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.380 16:02:31 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.380 16:02:31 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.380 16:02:31 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.380 16:02:31 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.380 16:02:31 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.380 16:02:31 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:13.380 16:02:31 thread -- scripts/common.sh@345 -- # : 1 00:08:13.380 16:02:31 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.380 16:02:31 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:13.380 16:02:31 thread -- scripts/common.sh@365 -- # decimal 1 00:08:13.380 16:02:32 thread -- scripts/common.sh@353 -- # local d=1 00:08:13.380 16:02:32 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.380 16:02:32 thread -- scripts/common.sh@355 -- # echo 1 00:08:13.380 16:02:32 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.380 16:02:32 thread -- scripts/common.sh@366 -- # decimal 2 00:08:13.380 16:02:32 thread -- scripts/common.sh@353 -- # local d=2 00:08:13.380 16:02:32 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.380 16:02:32 thread -- scripts/common.sh@355 -- # echo 2 00:08:13.380 16:02:32 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.380 16:02:32 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.380 16:02:32 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.380 16:02:32 thread -- scripts/common.sh@368 -- # return 0 00:08:13.380 16:02:32 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.380 16:02:32 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:13.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.380 --rc genhtml_branch_coverage=1 00:08:13.380 --rc genhtml_function_coverage=1 00:08:13.380 --rc genhtml_legend=1 00:08:13.380 --rc geninfo_all_blocks=1 00:08:13.380 --rc geninfo_unexecuted_blocks=1 00:08:13.380 00:08:13.380 ' 00:08:13.380 16:02:32 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:13.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.380 --rc genhtml_branch_coverage=1 00:08:13.380 --rc genhtml_function_coverage=1 00:08:13.380 --rc genhtml_legend=1 00:08:13.380 --rc geninfo_all_blocks=1 00:08:13.380 --rc geninfo_unexecuted_blocks=1 00:08:13.380 00:08:13.380 ' 00:08:13.380 16:02:32 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:13.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:13.380 --rc genhtml_branch_coverage=1 00:08:13.380 --rc genhtml_function_coverage=1 00:08:13.380 --rc genhtml_legend=1 00:08:13.380 --rc geninfo_all_blocks=1 00:08:13.380 --rc geninfo_unexecuted_blocks=1 00:08:13.380 00:08:13.380 ' 00:08:13.380 16:02:32 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:13.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.380 --rc genhtml_branch_coverage=1 00:08:13.380 --rc genhtml_function_coverage=1 00:08:13.380 --rc genhtml_legend=1 00:08:13.380 --rc geninfo_all_blocks=1 00:08:13.380 --rc geninfo_unexecuted_blocks=1 00:08:13.380 00:08:13.380 ' 00:08:13.380 16:02:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:13.380 16:02:32 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:13.380 16:02:32 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:13.380 16:02:32 thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.380 ************************************ 00:08:13.380 START TEST thread_poller_perf 00:08:13.380 ************************************ 00:08:13.380 16:02:32 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:13.380 [2024-11-04 16:02:32.082236] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:08:13.380 [2024-11-04 16:02:32.082502] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60686 ] 00:08:13.639 [2024-11-04 16:02:32.265411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.899 [2024-11-04 16:02:32.381065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.899 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:15.277 [2024-11-04T16:02:33.999Z] ====================================== 00:08:15.277 [2024-11-04T16:02:33.999Z] busy:2500629352 (cyc) 00:08:15.277 [2024-11-04T16:02:33.999Z] total_run_count: 386000 00:08:15.277 [2024-11-04T16:02:33.999Z] tsc_hz: 2490000000 (cyc) 00:08:15.277 [2024-11-04T16:02:33.999Z] ====================================== 00:08:15.277 [2024-11-04T16:02:33.999Z] poller_cost: 6478 (cyc), 2601 (nsec) 00:08:15.277 00:08:15.277 real 0m1.587s 00:08:15.277 ************************************ 00:08:15.277 END TEST thread_poller_perf 00:08:15.277 ************************************ 00:08:15.277 user 0m1.357s 00:08:15.277 sys 0m0.121s 00:08:15.277 16:02:33 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:15.277 16:02:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:15.277 16:02:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:15.277 16:02:33 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:15.277 16:02:33 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:15.277 16:02:33 thread -- common/autotest_common.sh@10 -- # set +x 00:08:15.277 ************************************ 00:08:15.277 START TEST thread_poller_perf 00:08:15.277 ************************************ 00:08:15.277 16:02:33 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:15.277 [2024-11-04 16:02:33.739994] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:08:15.277 [2024-11-04 16:02:33.740101] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60722 ] 00:08:15.277 [2024-11-04 16:02:33.922280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.572 [2024-11-04 16:02:34.036388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.572 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:08:16.954 [2024-11-04T16:02:35.676Z] ====================================== 00:08:16.954 [2024-11-04T16:02:35.676Z] busy:2494302352 (cyc) 00:08:16.954 [2024-11-04T16:02:35.676Z] total_run_count: 5077000 00:08:16.954 [2024-11-04T16:02:35.676Z] tsc_hz: 2490000000 (cyc) 00:08:16.954 [2024-11-04T16:02:35.676Z] ====================================== 00:08:16.954 [2024-11-04T16:02:35.676Z] poller_cost: 491 (cyc), 197 (nsec) 00:08:16.954 00:08:16.955 real 0m1.583s 00:08:16.955 user 0m1.356s 00:08:16.955 sys 0m0.119s 00:08:16.955 16:02:35 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:16.955 16:02:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:16.955 ************************************ 00:08:16.955 END TEST thread_poller_perf 00:08:16.955 ************************************ 00:08:16.955 16:02:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:16.955 ************************************ 00:08:16.955 END TEST thread 00:08:16.955 ************************************ 00:08:16.955 00:08:16.955 real 0m3.557s 00:08:16.955 user 0m2.875s 00:08:16.955 sys 0m0.470s 00:08:16.955 16:02:35 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:16.955 16:02:35 thread -- common/autotest_common.sh@10 -- # set +x 00:08:16.955 16:02:35 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:16.955 16:02:35 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:16.955 16:02:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:16.955 16:02:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:16.955 16:02:35 -- common/autotest_common.sh@10 -- # set +x 00:08:16.955 ************************************ 00:08:16.955 START TEST app_cmdline 00:08:16.955 ************************************ 00:08:16.955 16:02:35 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:16.955 * Looking for test storage... 
00:08:16.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:16.955 16:02:35 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:16.955 16:02:35 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:08:16.955 16:02:35 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:16.955 16:02:35 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.955 16:02:35 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:16.955 16:02:35 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.955 16:02:35 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:16.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.955 --rc genhtml_branch_coverage=1 00:08:16.955 --rc genhtml_function_coverage=1 00:08:16.955 --rc genhtml_legend=1 00:08:16.955 --rc geninfo_all_blocks=1 00:08:16.955 --rc geninfo_unexecuted_blocks=1 00:08:16.955 00:08:16.955 ' 00:08:16.955 16:02:35 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:16.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.955 --rc genhtml_branch_coverage=1 00:08:16.955 --rc genhtml_function_coverage=1 00:08:16.955 --rc genhtml_legend=1 00:08:16.955 --rc geninfo_all_blocks=1 00:08:16.955 --rc geninfo_unexecuted_blocks=1 00:08:16.955 
00:08:16.955 ' 00:08:16.955 16:02:35 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:16.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.955 --rc genhtml_branch_coverage=1 00:08:16.955 --rc genhtml_function_coverage=1 00:08:16.955 --rc genhtml_legend=1 00:08:16.955 --rc geninfo_all_blocks=1 00:08:16.955 --rc geninfo_unexecuted_blocks=1 00:08:16.955 00:08:16.955 ' 00:08:16.955 16:02:35 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:16.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.955 --rc genhtml_branch_coverage=1 00:08:16.955 --rc genhtml_function_coverage=1 00:08:16.955 --rc genhtml_legend=1 00:08:16.955 --rc geninfo_all_blocks=1 00:08:16.955 --rc geninfo_unexecuted_blocks=1 00:08:16.955 00:08:16.955 ' 00:08:16.955 16:02:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:16.955 16:02:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60811 00:08:16.955 16:02:35 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:16.955 16:02:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60811 00:08:16.955 16:02:35 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 60811 ']' 00:08:16.955 16:02:35 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.955 16:02:35 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:16.955 16:02:35 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.955 16:02:35 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:16.955 16:02:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:17.215 [2024-11-04 16:02:35.734788] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:08:17.215 [2024-11-04 16:02:35.735543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60811 ] 00:08:17.215 [2024-11-04 16:02:35.915967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.473 [2024-11-04 16:02:36.037799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.409 16:02:36 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:18.409 16:02:36 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:08:18.409 16:02:36 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:18.668 { 00:08:18.668 "version": "SPDK v25.01-pre git sha1 61de1ff17", 00:08:18.668 "fields": { 00:08:18.668 "major": 25, 00:08:18.668 "minor": 1, 00:08:18.668 "patch": 0, 00:08:18.668 "suffix": "-pre", 00:08:18.668 "commit": "61de1ff17" 00:08:18.668 } 00:08:18.668 } 00:08:18.668 16:02:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:18.668 16:02:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:18.668 16:02:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:18.668 16:02:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:18.668 16:02:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:18.668 16:02:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:18.668 16:02:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:18.668 16:02:37 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.668 16:02:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:18.668 16:02:37 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.668 16:02:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:18.668 16:02:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:18.668 16:02:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:18.668 16:02:37 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:18.668 16:02:37 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:18.668 16:02:37 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.668 16:02:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.668 16:02:37 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.668 16:02:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.668 16:02:37 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.668 16:02:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.668 16:02:37 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.668 16:02:37 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:18.668 16:02:37 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:18.927 request: 00:08:18.927 { 00:08:18.927 "method": "env_dpdk_get_mem_stats", 00:08:18.927 "req_id": 1 00:08:18.927 } 00:08:18.927 Got JSON-RPC error response 00:08:18.927 response: 00:08:18.927 { 00:08:18.927 "code": -32601, 00:08:18.927 "message": "Method not found" 00:08:18.927 } 00:08:18.927 16:02:37 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:18.927 16:02:37 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.927 16:02:37 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:18.927 16:02:37 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.927 16:02:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60811 00:08:18.927 16:02:37 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 60811 ']' 00:08:18.927 16:02:37 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 60811 00:08:18.927 16:02:37 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:08:18.927 16:02:37 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:18.927 16:02:37 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60811 00:08:18.927 killing process with pid 60811 00:08:18.927 16:02:37 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:18.927 16:02:37 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:18.927 16:02:37 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60811' 00:08:18.927 16:02:37 app_cmdline -- common/autotest_common.sh@971 -- # kill 60811 00:08:18.927 16:02:37 app_cmdline -- common/autotest_common.sh@976 -- # wait 60811 00:08:21.457 ************************************ 00:08:21.457 END TEST app_cmdline 00:08:21.457 ************************************ 00:08:21.457 00:08:21.457 real 0m4.720s 00:08:21.457 user 0m4.957s 00:08:21.457 sys 0m0.672s 00:08:21.457 16:02:40 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:21.457 16:02:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:21.716 16:02:40 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:21.716 16:02:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:21.716 16:02:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:21.716 16:02:40 -- common/autotest_common.sh@10 -- # set +x 00:08:21.716 ************************************ 00:08:21.716 START TEST version 00:08:21.716 ************************************ 00:08:21.716 16:02:40 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:21.716 * Looking for test storage... 
00:08:21.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:21.716 16:02:40 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:21.716 16:02:40 version -- common/autotest_common.sh@1691 -- # lcov --version 00:08:21.716 16:02:40 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:21.716 16:02:40 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:21.716 16:02:40 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.716 16:02:40 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.716 16:02:40 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.716 16:02:40 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.716 16:02:40 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.716 16:02:40 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.716 16:02:40 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.716 16:02:40 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.716 16:02:40 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.716 16:02:40 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.716 16:02:40 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.716 16:02:40 version -- scripts/common.sh@344 -- # case "$op" in 00:08:21.716 16:02:40 version -- scripts/common.sh@345 -- # : 1 00:08:21.716 16:02:40 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.716 16:02:40 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:21.716 16:02:40 version -- scripts/common.sh@365 -- # decimal 1 00:08:21.716 16:02:40 version -- scripts/common.sh@353 -- # local d=1 00:08:21.716 16:02:40 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.716 16:02:40 version -- scripts/common.sh@355 -- # echo 1 00:08:21.716 16:02:40 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.716 16:02:40 version -- scripts/common.sh@366 -- # decimal 2 00:08:21.716 16:02:40 version -- scripts/common.sh@353 -- # local d=2 00:08:21.716 16:02:40 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.716 16:02:40 version -- scripts/common.sh@355 -- # echo 2 00:08:21.716 16:02:40 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.716 16:02:40 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.716 16:02:40 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.716 16:02:40 version -- scripts/common.sh@368 -- # return 0 00:08:21.716 16:02:40 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.716 16:02:40 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:21.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.716 --rc genhtml_branch_coverage=1 00:08:21.716 --rc genhtml_function_coverage=1 00:08:21.716 --rc genhtml_legend=1 00:08:21.716 --rc geninfo_all_blocks=1 00:08:21.716 --rc geninfo_unexecuted_blocks=1 00:08:21.716 00:08:21.716 ' 00:08:21.716 16:02:40 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:21.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.716 --rc genhtml_branch_coverage=1 00:08:21.716 --rc genhtml_function_coverage=1 00:08:21.716 --rc genhtml_legend=1 00:08:21.716 --rc geninfo_all_blocks=1 00:08:21.716 --rc geninfo_unexecuted_blocks=1 00:08:21.716 00:08:21.716 ' 00:08:21.716 16:02:40 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:21.716 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:21.716 --rc genhtml_branch_coverage=1 00:08:21.716 --rc genhtml_function_coverage=1 00:08:21.716 --rc genhtml_legend=1 00:08:21.716 --rc geninfo_all_blocks=1 00:08:21.716 --rc geninfo_unexecuted_blocks=1 00:08:21.716 00:08:21.716 ' 00:08:21.716 16:02:40 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:21.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.716 --rc genhtml_branch_coverage=1 00:08:21.716 --rc genhtml_function_coverage=1 00:08:21.716 --rc genhtml_legend=1 00:08:21.716 --rc geninfo_all_blocks=1 00:08:21.716 --rc geninfo_unexecuted_blocks=1 00:08:21.716 00:08:21.716 ' 00:08:21.716 16:02:40 version -- app/version.sh@17 -- # get_header_version major 00:08:21.716 16:02:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:21.717 16:02:40 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.717 16:02:40 version -- app/version.sh@14 -- # cut -f2 00:08:21.717 16:02:40 version -- app/version.sh@17 -- # major=25 00:08:21.975 16:02:40 version -- app/version.sh@18 -- # get_header_version minor 00:08:21.975 16:02:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:21.975 16:02:40 version -- app/version.sh@14 -- # cut -f2 00:08:21.975 16:02:40 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.975 16:02:40 version -- app/version.sh@18 -- # minor=1 00:08:21.975 16:02:40 version -- app/version.sh@19 -- # get_header_version patch 00:08:21.975 16:02:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:21.975 16:02:40 version -- app/version.sh@14 -- # cut -f2 00:08:21.975 16:02:40 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.975 16:02:40 version -- app/version.sh@19 -- # patch=0 00:08:21.975 16:02:40 version -- app/version.sh@20 -- # get_header_version suffix 00:08:21.975 16:02:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:21.975 16:02:40 version -- app/version.sh@14 -- # cut -f2 00:08:21.975 16:02:40 version -- app/version.sh@14 -- # tr -d '"' 00:08:21.975 16:02:40 version -- app/version.sh@20 -- # suffix=-pre 00:08:21.975 16:02:40 version -- app/version.sh@22 -- # version=25.1 00:08:21.975 16:02:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:21.975 16:02:40 version -- app/version.sh@28 -- # version=25.1rc0 00:08:21.975 16:02:40 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:21.975 16:02:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:21.975 16:02:40 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:21.975 16:02:40 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:21.975 00:08:21.975 real 0m0.327s 00:08:21.975 user 0m0.213s 00:08:21.975 sys 0m0.156s 00:08:21.975 16:02:40 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:21.975 16:02:40 version -- common/autotest_common.sh@10 -- # set +x 00:08:21.975 ************************************ 00:08:21.975 END TEST version 00:08:21.975 ************************************ 00:08:21.975 16:02:40 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:21.975 16:02:40 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:21.975 16:02:40 -- spdk/autotest.sh@194 -- # uname -s 00:08:21.975 16:02:40 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:21.975 16:02:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:21.975 16:02:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:21.975 16:02:40 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:08:21.975 16:02:40 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:21.975 16:02:40 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:21.975 16:02:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:21.975 16:02:40 -- common/autotest_common.sh@10 -- # set +x 00:08:21.975 ************************************ 00:08:21.975 START TEST blockdev_nvme 00:08:21.975 ************************************ 00:08:21.975 16:02:40 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:21.975 * Looking for test storage... 00:08:21.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:21.975 16:02:40 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:22.234 16:02:40 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:08:22.234 16:02:40 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:22.234 16:02:40 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.234 16:02:40 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:08:22.234 16:02:40 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.234 16:02:40 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:22.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.234 --rc genhtml_branch_coverage=1 00:08:22.234 --rc genhtml_function_coverage=1 00:08:22.234 --rc genhtml_legend=1 00:08:22.234 --rc geninfo_all_blocks=1 00:08:22.234 --rc geninfo_unexecuted_blocks=1 00:08:22.234 00:08:22.234 ' 00:08:22.234 16:02:40 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:22.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.234 --rc genhtml_branch_coverage=1 00:08:22.234 --rc genhtml_function_coverage=1 00:08:22.234 --rc genhtml_legend=1 00:08:22.234 --rc geninfo_all_blocks=1 00:08:22.234 --rc geninfo_unexecuted_blocks=1 00:08:22.234 00:08:22.234 ' 00:08:22.234 16:02:40 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:22.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.234 --rc genhtml_branch_coverage=1 00:08:22.234 --rc genhtml_function_coverage=1 00:08:22.234 --rc genhtml_legend=1 00:08:22.234 --rc geninfo_all_blocks=1 00:08:22.234 --rc geninfo_unexecuted_blocks=1 00:08:22.234 00:08:22.234 ' 00:08:22.234 16:02:40 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:22.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.234 --rc genhtml_branch_coverage=1 00:08:22.234 --rc genhtml_function_coverage=1 00:08:22.234 --rc genhtml_legend=1 00:08:22.234 --rc geninfo_all_blocks=1 00:08:22.234 --rc geninfo_unexecuted_blocks=1 00:08:22.234 00:08:22.234 ' 00:08:22.234 16:02:40 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:22.234 16:02:40 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61005 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:22.235 16:02:40 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61005 00:08:22.235 16:02:40 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 61005 ']' 00:08:22.235 16:02:40 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.235 16:02:40 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:22.235 16:02:40 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.235 16:02:40 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:22.235 16:02:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.235 [2024-11-04 16:02:40.940052] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:08:22.235 [2024-11-04 16:02:40.940385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61005 ] 00:08:22.494 [2024-11-04 16:02:41.126486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.753 [2024-11-04 16:02:41.258294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.692 16:02:42 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:23.692 16:02:42 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:08:23.692 16:02:42 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:23.692 16:02:42 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:08:23.692 16:02:42 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:23.692 16:02:42 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:23.692 16:02:42 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:23.692 16:02:42 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:23.692 16:02:42 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.692 16:02:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.951 16:02:42 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.951 16:02:42 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:23.951 16:02:42 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.951 16:02:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.951 16:02:42 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.951 16:02:42 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:08:23.951 16:02:42 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:23.951 16:02:42 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.952 16:02:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.952 16:02:42 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.952 16:02:42 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:23.952 16:02:42 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.952 16:02:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.952 16:02:42 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.952 16:02:42 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:23.952 16:02:42 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.952 16:02:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.211 16:02:42 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.211 16:02:42 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:24.211 16:02:42 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:24.211 16:02:42 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:24.211 16:02:42 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.211 16:02:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.211 16:02:42 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.211 16:02:42 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:24.211 16:02:42 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:24.211 16:02:42 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "557d7325-0195-403f-8b10-0c41061975b3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "557d7325-0195-403f-8b10-0c41061975b3",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "908bc232-3fc6-49fa-a3bd-7a111f840cb3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "908bc232-3fc6-49fa-a3bd-7a111f840cb3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "0378eb42-4f08-43db-8766-27965f81f315"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0378eb42-4f08-43db-8766-27965f81f315",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "59e2fd14-778f-474e-9cca-3c54c62e70f1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "59e2fd14-778f-474e-9cca-3c54c62e70f1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "fc896c83-148a-4265-a141-6ceb44a8adde"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "fc896c83-148a-4265-a141-6ceb44a8adde",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "538be774-6fd7-4804-b5d0-43c1309b6759"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "538be774-6fd7-4804-b5d0-43c1309b6759",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:24.211 16:02:42 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:24.211 16:02:42 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:24.211 16:02:42 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:24.211 16:02:42 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61005 00:08:24.212 16:02:42 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 61005 ']' 00:08:24.212 16:02:42 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 61005 00:08:24.212 16:02:42 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:08:24.212 16:02:42 
blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:24.212 16:02:42 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61005 00:08:24.212 16:02:42 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:24.212 16:02:42 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:24.212 16:02:42 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61005' 00:08:24.212 killing process with pid 61005 00:08:24.212 16:02:42 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 61005 00:08:24.212 16:02:42 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 61005 00:08:26.750 16:02:45 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:26.750 16:02:45 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:26.750 16:02:45 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:08:26.750 16:02:45 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:26.750 16:02:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.750 ************************************ 00:08:26.750 START TEST bdev_hello_world 00:08:26.750 ************************************ 00:08:26.750 16:02:45 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:27.009 [2024-11-04 16:02:45.519876] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:08:27.009 [2024-11-04 16:02:45.520011] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61106 ] 00:08:27.009 [2024-11-04 16:02:45.692848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.268 [2024-11-04 16:02:45.811679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.837 [2024-11-04 16:02:46.481608] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:27.837 [2024-11-04 16:02:46.481659] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:27.837 [2024-11-04 16:02:46.481683] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:27.837 [2024-11-04 16:02:46.484698] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:27.837 [2024-11-04 16:02:46.485486] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:27.837 [2024-11-04 16:02:46.485523] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:27.837 [2024-11-04 16:02:46.485687] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
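For context, the hello_world step traced above amounts to launching the prebuilt hello_bdev example against a JSON bdev config that attaches the QEMU NVMe controllers; a minimal manual sketch, assuming the same repo layout and that test/bdev/bdev.json already carries the bdev_nvme_attach_controller entries printed earlier, could look like this:

#!/usr/bin/env bash
# Sketch only: replay the hello_bdev invocation used by the harness above.
# Assumes SPDK is built under /home/vagrant/spdk_repo/spdk and that
# test/bdev/bdev.json holds the bdev_nvme_attach_controller config shown earlier;
# running SPDK apps against PCIe NVMe devices typically requires root privileges.
cd /home/vagrant/spdk_repo/spdk
# -b selects the bdev to open; the example writes "Hello World!" to it and reads it back.
build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1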
00:08:27.837 00:08:27.837 [2024-11-04 16:02:46.485714] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:29.216 00:08:29.216 real 0m2.178s 00:08:29.216 user 0m1.809s 00:08:29.216 sys 0m0.257s 00:08:29.216 16:02:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:29.216 16:02:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:29.216 ************************************ 00:08:29.216 END TEST bdev_hello_world 00:08:29.216 ************************************ 00:08:29.216 16:02:47 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:29.216 16:02:47 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:29.216 16:02:47 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:29.216 16:02:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.216 ************************************ 00:08:29.216 START TEST bdev_bounds 00:08:29.216 ************************************ 00:08:29.216 16:02:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:08:29.216 16:02:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61148 00:08:29.216 16:02:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:29.216 Process bdevio pid: 61148 00:08:29.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.216 16:02:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:29.216 16:02:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61148' 00:08:29.216 16:02:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61148 00:08:29.216 16:02:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 61148 ']' 00:08:29.216 16:02:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.216 16:02:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:29.216 16:02:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.216 16:02:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:29.216 16:02:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:29.216 [2024-11-04 16:02:47.782128] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:08:29.216 [2024-11-04 16:02:47.782304] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61148 ] 00:08:29.534 [2024-11-04 16:02:47.982906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:29.534 [2024-11-04 16:02:48.101945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.534 [2024-11-04 16:02:48.102104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.534 [2024-11-04 16:02:48.102133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.112 16:02:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:30.112 16:02:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:08:30.112 16:02:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:30.373 I/O targets: 00:08:30.373 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:30.373 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:30.373 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:30.373 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:30.373 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:30.373 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:30.373 00:08:30.373 00:08:30.373 CUnit - A unit testing framework for C - Version 2.1-3 00:08:30.373 http://cunit.sourceforge.net/ 00:08:30.373 00:08:30.373 00:08:30.373 Suite: bdevio tests on: Nvme3n1 00:08:30.373 Test: blockdev write read block ...passed 00:08:30.373 Test: blockdev write zeroes read block ...passed 00:08:30.373 Test: blockdev write zeroes read no split ...passed 00:08:30.373 Test: blockdev write zeroes read split ...passed 00:08:30.373 Test: blockdev write zeroes read split partial ...passed 00:08:30.373 Test: blockdev reset ...[2024-11-04 16:02:48.952737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:30.373 passed 00:08:30.373 Test: blockdev write read 8 blocks ...[2024-11-04 16:02:48.956811] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:08:30.373 passed 00:08:30.373 Test: blockdev write read size > 128k ...passed 00:08:30.373 Test: blockdev write read invalid size ...passed 00:08:30.373 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.373 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.373 Test: blockdev write read max offset ...passed 00:08:30.373 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.373 Test: blockdev writev readv 8 blocks ...passed 00:08:30.373 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.373 Test: blockdev writev readv block ...passed 00:08:30.373 Test: blockdev writev readv size > 128k ...passed 00:08:30.373 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.373 Test: blockdev comparev and writev ...[2024-11-04 16:02:48.966112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb60a000 len:0x1000 00:08:30.373 [2024-11-04 16:02:48.966159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:30.373 passed 00:08:30.373 Test: blockdev nvme passthru rw ...passed 00:08:30.373 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.373 Test: blockdev nvme admin passthru ...[2024-11-04 16:02:48.967178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:30.373 [2024-11-04 16:02:48.967221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:30.373 passed 00:08:30.373 Test: blockdev copy ...passed 00:08:30.373 Suite: bdevio tests on: Nvme2n3 00:08:30.373 Test: blockdev write read block ...passed 00:08:30.373 Test: blockdev write zeroes read block ...passed 00:08:30.373 Test: blockdev write zeroes read no split ...passed 00:08:30.373 Test: blockdev write zeroes read split ...passed 00:08:30.373 Test: blockdev write zeroes read split partial ...passed 00:08:30.373 Test: blockdev reset ...[2024-11-04 16:02:49.045366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:30.373 [2024-11-04 16:02:49.050025] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:08:30.373 Test: blockdev write read 8 blocks ...uccessful. 
00:08:30.373 passed 00:08:30.373 Test: blockdev write read size > 128k ...passed 00:08:30.373 Test: blockdev write read invalid size ...passed 00:08:30.373 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.373 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.373 Test: blockdev write read max offset ...passed 00:08:30.373 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.373 Test: blockdev writev readv 8 blocks ...passed 00:08:30.373 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.373 Test: blockdev writev readv block ...passed 00:08:30.373 Test: blockdev writev readv size > 128k ...passed 00:08:30.373 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.373 Test: blockdev comparev and writev ...[2024-11-04 16:02:49.060047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29e806000 len:0x1000 00:08:30.373 [2024-11-04 16:02:49.060091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:30.373 passed 00:08:30.373 Test: blockdev nvme passthru rw ...passed 00:08:30.373 Test: blockdev nvme passthru vendor specific ...[2024-11-04 16:02:49.061111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:08:30.373 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0 00:08:30.373 [2024-11-04 16:02:49.061254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:30.373 passed 00:08:30.373 Test: blockdev copy ...passed 00:08:30.373 Suite: bdevio tests on: Nvme2n2 00:08:30.374 Test: blockdev write read block ...passed 00:08:30.374 Test: blockdev write zeroes read block ...passed 00:08:30.374 Test: blockdev write zeroes read no split ...passed 00:08:30.632 Test: blockdev write zeroes read split ...passed 00:08:30.632 Test: blockdev write zeroes read split partial ...passed 00:08:30.632 Test: blockdev reset ...[2024-11-04 16:02:49.129921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:30.632 passed 00:08:30.632 Test: blockdev write read 8 blocks ...[2024-11-04 16:02:49.133948] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:30.632 passed 00:08:30.632 Test: blockdev write read size > 128k ...passed 00:08:30.632 Test: blockdev write read invalid size ...passed 00:08:30.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.632 Test: blockdev write read max offset ...passed 00:08:30.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.632 Test: blockdev writev readv 8 blocks ...passed 00:08:30.632 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.632 Test: blockdev writev readv block ...passed 00:08:30.632 Test: blockdev writev readv size > 128k ...passed 00:08:30.632 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.632 Test: blockdev comparev and writev ...[2024-11-04 16:02:49.142322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6e3c000 len:0x1000 00:08:30.632 [2024-11-04 16:02:49.142367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:30.632 passed 00:08:30.632 Test: blockdev nvme passthru rw ...passed 00:08:30.632 Test: blockdev nvme passthru vendor specific ...[2024-11-04 16:02:49.143177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:08:30.632 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0 00:08:30.632 [2024-11-04 16:02:49.143251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:30.632 passed 00:08:30.632 Test: blockdev copy ...passed 00:08:30.632 Suite: bdevio tests on: Nvme2n1 00:08:30.632 Test: blockdev write read block ...passed 00:08:30.632 Test: blockdev write zeroes read block ...passed 00:08:30.632 Test: blockdev write zeroes read no split ...passed 00:08:30.632 Test: blockdev write zeroes read split ...passed 00:08:30.632 Test: blockdev write zeroes read split partial ...passed 00:08:30.632 Test: blockdev reset ...[2024-11-04 16:02:49.226247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:30.632 [2024-11-04 16:02:49.230534] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:08:30.632 Test: blockdev write read 8 blocks ...uccessful. 
00:08:30.632 passed 00:08:30.632 Test: blockdev write read size > 128k ...passed 00:08:30.632 Test: blockdev write read invalid size ...passed 00:08:30.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.632 Test: blockdev write read max offset ...passed 00:08:30.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.632 Test: blockdev writev readv 8 blocks ...passed 00:08:30.632 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.632 Test: blockdev writev readv block ...passed 00:08:30.632 Test: blockdev writev readv size > 128k ...passed 00:08:30.632 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.632 Test: blockdev comparev and writev ...[2024-11-04 16:02:49.239165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:08:30.632 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2d6e38000 len:0x1000 00:08:30.632 [2024-11-04 16:02:49.239316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:30.632 passed 00:08:30.632 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.632 Test: blockdev nvme admin passthru ...[2024-11-04 16:02:49.240142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:30.632 [2024-11-04 16:02:49.240180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:30.632 passed 00:08:30.632 Test: blockdev copy ...passed 00:08:30.632 Suite: bdevio tests on: Nvme1n1 00:08:30.632 Test: blockdev write read block ...passed 00:08:30.632 Test: blockdev write zeroes read block ...passed 00:08:30.632 Test: blockdev write zeroes read no split ...passed 00:08:30.632 Test: blockdev write zeroes read split ...passed 00:08:30.632 Test: blockdev write zeroes read split partial ...passed 00:08:30.632 Test: blockdev reset ...[2024-11-04 16:02:49.316091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:30.632 [2024-11-04 16:02:49.320053] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller spassed 00:08:30.632 Test: blockdev write read 8 blocks ...uccessful. 
00:08:30.632 passed 00:08:30.632 Test: blockdev write read size > 128k ...passed 00:08:30.632 Test: blockdev write read invalid size ...passed 00:08:30.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.632 Test: blockdev write read max offset ...passed 00:08:30.633 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.633 Test: blockdev writev readv 8 blocks ...passed 00:08:30.633 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.633 Test: blockdev writev readv block ...passed 00:08:30.633 Test: blockdev writev readv size > 128k ...passed 00:08:30.633 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.633 Test: blockdev comparev and writev ...[2024-11-04 16:02:49.328624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6e34000 len:0x1000 00:08:30.633 [2024-11-04 16:02:49.328671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:30.633 passed 00:08:30.633 Test: blockdev nvme passthru rw ...passed 00:08:30.633 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.633 Test: blockdev nvme admin passthru ...[2024-11-04 16:02:49.329583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:30.633 [2024-11-04 16:02:49.329619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:30.633 passed 00:08:30.633 Test: blockdev copy ...passed 00:08:30.633 Suite: bdevio tests on: Nvme0n1 00:08:30.633 Test: blockdev write read block ...passed 00:08:30.633 Test: blockdev write zeroes read block ...passed 00:08:30.633 Test: blockdev write zeroes read no split ...passed 00:08:30.891 Test: blockdev write zeroes read split ...passed 00:08:30.891 Test: blockdev write zeroes read split partial ...passed 00:08:30.891 Test: blockdev reset ...[2024-11-04 16:02:49.410016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:30.891 [2024-11-04 16:02:49.413951] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller spassed 00:08:30.891 Test: blockdev write read 8 blocks ...uccessful. 00:08:30.891 passed 00:08:30.891 Test: blockdev write read size > 128k ...passed 00:08:30.891 Test: blockdev write read invalid size ...passed 00:08:30.891 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:30.891 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:30.891 Test: blockdev write read max offset ...passed 00:08:30.891 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:30.891 Test: blockdev writev readv 8 blocks ...passed 00:08:30.891 Test: blockdev writev readv 30 x 1block ...passed 00:08:30.891 Test: blockdev writev readv block ...passed 00:08:30.891 Test: blockdev writev readv size > 128k ...passed 00:08:30.891 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:30.891 Test: blockdev comparev and writev ...passed 00:08:30.891 Test: blockdev nvme passthru rw ...[2024-11-04 16:02:49.422395] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:30.891 separate metadata which is not supported yet. 
00:08:30.891 passed 00:08:30.891 Test: blockdev nvme passthru vendor specific ...passed 00:08:30.891 Test: blockdev nvme admin passthru ...[2024-11-04 16:02:49.422982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:30.891 [2024-11-04 16:02:49.423031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:30.891 passed 00:08:30.891 Test: blockdev copy ...passed 00:08:30.891 00:08:30.891 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.891 suites 6 6 n/a 0 0 00:08:30.891 tests 138 138 138 0 0 00:08:30.891 asserts 893 893 893 0 n/a 00:08:30.891 00:08:30.891 Elapsed time = 1.494 seconds 00:08:30.891 0 00:08:30.891 16:02:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61148 00:08:30.891 16:02:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 61148 ']' 00:08:30.891 16:02:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 61148 00:08:30.891 16:02:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:08:30.891 16:02:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:30.891 16:02:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61148 00:08:30.891 killing process with pid 61148 00:08:30.891 16:02:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:30.891 16:02:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:30.891 16:02:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61148' 00:08:30.891 16:02:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 61148 00:08:30.891 16:02:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 61148 00:08:31.826 16:02:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:31.826 00:08:31.826 real 0m2.850s 00:08:31.826 user 0m7.201s 00:08:31.826 sys 0m0.423s 00:08:31.826 16:02:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:31.826 ************************************ 00:08:31.826 END TEST bdev_bounds 00:08:31.826 ************************************ 00:08:31.826 16:02:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:32.084 16:02:50 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:32.084 16:02:50 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:32.084 16:02:50 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:32.084 16:02:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:32.084 ************************************ 00:08:32.084 START TEST bdev_nbd 00:08:32.084 ************************************ 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61213 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61213 /var/tmp/spdk-nbd.sock 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 61213 ']' 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:32.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:32.084 16:02:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:32.084 [2024-11-04 16:02:50.697400] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:08:32.085 [2024-11-04 16:02:50.697528] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.344 [2024-11-04 16:02:50.881353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.344 [2024-11-04 16:02:51.001192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.281 16:02:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:33.281 16:02:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:33.282 1+0 records in 
00:08:33.282 1+0 records out 00:08:33.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579537 s, 7.1 MB/s 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:33.282 16:02:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:33.541 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:33.541 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:33.541 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:33.541 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:33.541 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:33.541 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:33.541 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:33.541 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:33.541 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:33.541 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:33.541 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:33.541 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:33.541 1+0 records in 00:08:33.541 1+0 records out 00:08:33.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444386 s, 9.2 MB/s 00:08:33.799 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:33.799 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:33.799 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:33.799 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:33.799 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:33.799 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:33.799 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:33.799 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:33.799 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:33.799 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:33.799 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:08:33.799 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:08:33.799 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:33.799 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:33.799 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:33.799 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:08:34.058 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:34.058 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:34.058 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:34.058 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.058 1+0 records in 00:08:34.058 1+0 records out 00:08:34.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663426 s, 6.2 MB/s 00:08:34.058 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.058 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:34.058 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.058 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:34.058 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:34.058 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.058 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:34.058 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.317 1+0 records in 00:08:34.317 1+0 records out 00:08:34.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707706 s, 5.8 MB/s 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.317 16:02:52 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:34.317 16:02:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.576 1+0 records in 00:08:34.576 1+0 records out 00:08:34.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075182 s, 5.4 MB/s 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:34.576 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.835 1+0 records in 00:08:34.835 1+0 records out 00:08:34.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578639 s, 7.1 MB/s 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:34.835 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:35.094 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:35.094 { 00:08:35.094 "nbd_device": "/dev/nbd0", 00:08:35.094 "bdev_name": "Nvme0n1" 00:08:35.094 }, 00:08:35.094 { 00:08:35.094 "nbd_device": "/dev/nbd1", 00:08:35.094 "bdev_name": "Nvme1n1" 00:08:35.094 }, 00:08:35.094 { 00:08:35.094 "nbd_device": "/dev/nbd2", 00:08:35.094 "bdev_name": "Nvme2n1" 00:08:35.094 }, 00:08:35.094 { 00:08:35.094 "nbd_device": "/dev/nbd3", 00:08:35.094 "bdev_name": "Nvme2n2" 00:08:35.094 }, 00:08:35.094 { 00:08:35.094 "nbd_device": "/dev/nbd4", 00:08:35.094 "bdev_name": "Nvme2n3" 00:08:35.094 }, 00:08:35.094 { 00:08:35.094 "nbd_device": "/dev/nbd5", 00:08:35.094 "bdev_name": "Nvme3n1" 00:08:35.094 } 00:08:35.094 ]' 00:08:35.094 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:35.094 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:35.094 { 00:08:35.094 "nbd_device": "/dev/nbd0", 00:08:35.094 "bdev_name": "Nvme0n1" 00:08:35.094 }, 00:08:35.094 { 00:08:35.094 "nbd_device": "/dev/nbd1", 00:08:35.094 "bdev_name": "Nvme1n1" 00:08:35.094 }, 00:08:35.094 { 00:08:35.094 "nbd_device": "/dev/nbd2", 00:08:35.094 "bdev_name": "Nvme2n1" 00:08:35.094 }, 00:08:35.094 { 00:08:35.094 "nbd_device": "/dev/nbd3", 00:08:35.094 "bdev_name": "Nvme2n2" 00:08:35.094 }, 00:08:35.094 { 00:08:35.094 "nbd_device": "/dev/nbd4", 00:08:35.094 "bdev_name": "Nvme2n3" 00:08:35.094 }, 00:08:35.094 { 00:08:35.094 "nbd_device": "/dev/nbd5", 00:08:35.094 "bdev_name": "Nvme3n1" 00:08:35.094 } 00:08:35.094 ]' 00:08:35.094 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:35.094 16:02:53 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:35.094 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.094 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:35.094 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:35.094 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:35.094 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.094 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:35.353 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:35.353 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:35.353 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:35.353 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:35.353 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:35.353 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:35.353 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:35.353 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:35.353 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.353 16:02:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:35.611 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:35.611 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:35.611 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:35.611 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:35.611 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:35.611 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:35.611 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:35.611 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:35.611 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.611 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:35.870 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:35.870 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:35.870 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:35.870 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:35.870 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:35.870 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:35.870 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:35.870 16:02:54 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:35.870 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.870 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:36.128 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:36.128 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:36.128 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:36.128 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.128 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.128 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:36.128 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:36.128 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.128 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.128 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:36.128 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:36.129 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:36.129 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:36.129 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.129 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.129 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:36.129 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:36.129 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.129 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.387 16:02:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:36.387 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:36.387 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:36.387 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:36.387 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.387 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.387 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:36.387 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:36.387 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.387 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:36.387 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.387 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:36.646 16:02:55 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:36.646 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.647 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:36.647 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:36.647 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:36.647 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:36.647 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:36.647 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:36.647 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:36.647 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:36.906 /dev/nbd0 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:36.906 
16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:36.906 1+0 records in 00:08:36.906 1+0 records out 00:08:36.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062309 s, 6.6 MB/s 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:36.906 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:37.165 /dev/nbd1 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:37.165 1+0 records in 00:08:37.165 1+0 records out 00:08:37.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568523 s, 7.2 MB/s 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 
-- # return 0 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:37.165 16:02:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:37.424 /dev/nbd10 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:37.425 1+0 records in 00:08:37.425 1+0 records out 00:08:37.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488492 s, 8.4 MB/s 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:37.425 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:37.683 /dev/nbd11 00:08:37.683 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:37.683 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:37.683 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:08:37.683 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:37.683 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:37.683 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:37.683 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:08:37.683 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:37.683 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:37.683 16:02:56 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:37.683 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:37.683 1+0 records in 00:08:37.683 1+0 records out 00:08:37.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638587 s, 6.4 MB/s 00:08:37.683 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.684 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:37.684 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.684 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:37.684 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:37.684 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.684 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:37.684 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:37.942 /dev/nbd12 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:37.942 1+0 records in 00:08:37.942 1+0 records out 00:08:37.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587628 s, 7.0 MB/s 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:37.942 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:38.201 /dev/nbd13 00:08:38.201 16:02:56 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:38.201 1+0 records in 00:08:38.201 1+0 records out 00:08:38.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000723131 s, 5.7 MB/s 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:38.201 16:02:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:38.460 { 00:08:38.460 "nbd_device": "/dev/nbd0", 00:08:38.460 "bdev_name": "Nvme0n1" 00:08:38.460 }, 00:08:38.460 { 00:08:38.460 "nbd_device": "/dev/nbd1", 00:08:38.460 "bdev_name": "Nvme1n1" 00:08:38.460 }, 00:08:38.460 { 00:08:38.460 "nbd_device": "/dev/nbd10", 00:08:38.460 "bdev_name": "Nvme2n1" 00:08:38.460 }, 00:08:38.460 { 00:08:38.460 "nbd_device": "/dev/nbd11", 00:08:38.460 "bdev_name": "Nvme2n2" 00:08:38.460 }, 00:08:38.460 { 00:08:38.460 "nbd_device": "/dev/nbd12", 00:08:38.460 "bdev_name": "Nvme2n3" 00:08:38.460 }, 00:08:38.460 { 00:08:38.460 "nbd_device": "/dev/nbd13", 00:08:38.460 "bdev_name": "Nvme3n1" 00:08:38.460 } 00:08:38.460 ]' 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:38.460 { 00:08:38.460 "nbd_device": "/dev/nbd0", 00:08:38.460 "bdev_name": "Nvme0n1" 00:08:38.460 }, 00:08:38.460 { 00:08:38.460 "nbd_device": "/dev/nbd1", 00:08:38.460 "bdev_name": "Nvme1n1" 00:08:38.460 }, 00:08:38.460 { 00:08:38.460 "nbd_device": "/dev/nbd10", 00:08:38.460 "bdev_name": "Nvme2n1" 00:08:38.460 }, 00:08:38.460 
{ 00:08:38.460 "nbd_device": "/dev/nbd11", 00:08:38.460 "bdev_name": "Nvme2n2" 00:08:38.460 }, 00:08:38.460 { 00:08:38.460 "nbd_device": "/dev/nbd12", 00:08:38.460 "bdev_name": "Nvme2n3" 00:08:38.460 }, 00:08:38.460 { 00:08:38.460 "nbd_device": "/dev/nbd13", 00:08:38.460 "bdev_name": "Nvme3n1" 00:08:38.460 } 00:08:38.460 ]' 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:38.460 /dev/nbd1 00:08:38.460 /dev/nbd10 00:08:38.460 /dev/nbd11 00:08:38.460 /dev/nbd12 00:08:38.460 /dev/nbd13' 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:38.460 /dev/nbd1 00:08:38.460 /dev/nbd10 00:08:38.460 /dev/nbd11 00:08:38.460 /dev/nbd12 00:08:38.460 /dev/nbd13' 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:38.460 256+0 records in 00:08:38.460 256+0 records out 00:08:38.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139862 s, 75.0 MB/s 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.460 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:38.719 256+0 records in 00:08:38.719 256+0 records out 00:08:38.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119493 s, 8.8 MB/s 00:08:38.719 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.719 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:38.719 256+0 records in 00:08:38.719 256+0 records out 00:08:38.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123382 s, 8.5 MB/s 00:08:38.719 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.719 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:38.978 256+0 records in 00:08:38.978 256+0 records out 00:08:38.978 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.121193 s, 8.7 MB/s 00:08:38.978 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.978 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:38.978 256+0 records in 00:08:38.978 256+0 records out 00:08:38.978 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122192 s, 8.6 MB/s 00:08:38.978 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.978 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:39.238 256+0 records in 00:08:39.238 256+0 records out 00:08:39.238 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123615 s, 8.5 MB/s 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:39.238 256+0 records in 00:08:39.238 256+0 records out 00:08:39.238 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12516 s, 8.4 MB/s 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:39.238 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:39.498 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:39.498 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:39.498 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.498 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:39.498 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:39.498 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:39.498 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.498 16:02:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:39.498 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:39.498 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:39.498 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:39.498 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.498 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.498 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:39.498 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:39.498 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.498 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.498 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:39.756 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:39.756 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:39.756 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:39.756 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.756 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.756 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:39.756 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:39.756 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.756 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.756 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:40.016 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:40.016 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:40.016 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:40.016 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.016 16:02:58 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.016 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:40.016 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:40.016 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.016 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.016 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:40.275 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:40.275 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:40.275 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:40.275 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.275 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.275 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:40.275 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:40.275 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.275 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.275 16:02:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:40.534 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:40.534 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:40.534 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:40.534 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.534 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.534 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:40.534 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:40.534 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.534 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:40.534 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:40.794 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:40.794 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:40.794 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:40.794 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:40.794 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:40.794 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:40.794 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:40.794 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:40.794 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:40.794 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.794 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:40.794 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:40.794 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:41.054 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:41.054 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:41.054 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:41.054 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:41.054 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:41.054 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:41.054 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:41.054 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:41.054 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:41.054 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:41.054 16:02:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:41.054 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.054 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:41.054 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:41.054 malloc_lvol_verify 00:08:41.314 16:02:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:41.314 745eef66-3e54-42d6-a266-4d17df47edf0 00:08:41.314 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:41.573 410d9566-2ebf-4d8d-b335-f0619d24997e 00:08:41.573 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:41.832 /dev/nbd0 00:08:41.832 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:41.832 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:41.832 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:41.832 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:41.832 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:41.832 mke2fs 1.47.0 (5-Feb-2023) 00:08:41.832 Discarding device blocks: 0/4096 done 00:08:41.832 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:41.832 00:08:41.832 Allocating group tables: 0/1 done 00:08:41.832 Writing inode tables: 0/1 done 00:08:41.832 Creating journal (1024 blocks): done 00:08:41.832 Writing superblocks and filesystem accounting information: 0/1 done 00:08:41.832 00:08:41.832 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:41.832 16:03:00 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.832 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:41.832 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:41.832 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:41.832 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.832 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61213 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 61213 ']' 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 61213 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61213 00:08:42.091 killing process with pid 61213 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61213' 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 61213 00:08:42.091 16:03:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 61213 00:08:43.474 16:03:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:43.474 00:08:43.474 real 0m11.454s 00:08:43.474 user 0m14.922s 00:08:43.474 sys 0m4.669s 00:08:43.474 16:03:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:43.474 16:03:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:43.474 ************************************ 00:08:43.474 END TEST bdev_nbd 00:08:43.474 ************************************ 00:08:43.474 16:03:02 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:43.474 16:03:02 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:08:43.474 skipping fio tests on NVMe due to multi-ns failures. 00:08:43.474 16:03:02 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
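The nbd_common.sh helpers traced in the TEST bdev_nbd run above reduce to a handful of RPCs against the NBD control socket. A rough manual sketch of the same start/verify/stop flow follows — the socket path, RPC names, bdev name, and dd/cmp flags are taken from the trace above, while the temp-file location is illustrative and an SPDK target is assumed to already be listening on the socket:

  # export a bdev as a kernel NBD device and confirm the {nbd_device, bdev_name} mapping
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
  # write a known 1 MiB pattern through the NBD device, then compare it against the source file
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0
  # tear the export down again
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0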
00:08:43.474 16:03:02 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:43.474 16:03:02 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:43.474 16:03:02 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:08:43.474 16:03:02 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:43.474 16:03:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.474 ************************************ 00:08:43.474 START TEST bdev_verify 00:08:43.474 ************************************ 00:08:43.474 16:03:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:43.734 [2024-11-04 16:03:02.222515] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:08:43.734 [2024-11-04 16:03:02.222661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61602 ] 00:08:43.734 [2024-11-04 16:03:02.407341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:43.994 [2024-11-04 16:03:02.525002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.994 [2024-11-04 16:03:02.525043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.563 Running I/O for 5 seconds... 00:08:46.883 17792.00 IOPS, 69.50 MiB/s [2024-11-04T16:03:06.541Z] 17440.00 IOPS, 68.12 MiB/s [2024-11-04T16:03:07.536Z] 16981.33 IOPS, 66.33 MiB/s [2024-11-04T16:03:08.473Z] 17504.00 IOPS, 68.38 MiB/s [2024-11-04T16:03:08.473Z] 17548.80 IOPS, 68.55 MiB/s 00:08:49.751 Latency(us) 00:08:49.751 [2024-11-04T16:03:08.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.751 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.751 Verification LBA range: start 0x0 length 0xbd0bd 00:08:49.751 Nvme0n1 : 5.06 1315.22 5.14 0.00 0.00 96895.00 24108.83 83380.74 00:08:49.751 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.751 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:49.751 Nvme0n1 : 5.08 1563.47 6.11 0.00 0.00 81703.65 16844.59 75800.67 00:08:49.751 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.751 Verification LBA range: start 0x0 length 0xa0000 00:08:49.751 Nvme1n1 : 5.08 1322.77 5.17 0.00 0.00 96272.75 7053.67 79169.59 00:08:49.751 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.751 Verification LBA range: start 0xa0000 length 0xa0000 00:08:49.751 Nvme1n1 : 5.08 1562.99 6.11 0.00 0.00 81615.12 17160.43 69905.07 00:08:49.751 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.751 Verification LBA range: start 0x0 length 0x80000 00:08:49.751 Nvme2n1 : 5.08 1321.71 5.16 0.00 0.00 96079.14 9896.20 76642.90 00:08:49.751 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.751 Verification LBA range: start 0x80000 length 0x80000 00:08:49.751 Nvme2n1 : 5.08 1562.55 6.10 0.00 0.00 81380.82 15475.97 68641.72 00:08:49.751 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.751 Verification LBA range: start 0x0 length 0x80000 00:08:49.751 Nvme2n2 : 5.09 1321.00 5.16 0.00 0.00 95937.27 12159.69 78327.36 00:08:49.751 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.751 Verification LBA range: start 0x80000 length 0x80000 00:08:49.751 Nvme2n2 : 5.08 1562.05 6.10 0.00 0.00 81261.94 15054.86 71589.53 00:08:49.751 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.751 Verification LBA range: start 0x0 length 0x80000 00:08:49.751 Nvme2n3 : 5.09 1320.46 5.16 0.00 0.00 95783.09 12791.36 82117.40 00:08:49.751 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.751 Verification LBA range: start 0x80000 length 0x80000 00:08:49.751 Nvme2n3 : 5.08 1560.95 6.10 0.00 0.00 81183.12 16949.87 74116.22 00:08:49.751 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:49.751 Verification LBA range: start 0x0 length 0x20000 00:08:49.751 Nvme3n1 : 5.09 1320.16 5.16 0.00 0.00 95666.87 12580.81 84222.97 00:08:49.751 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:49.751 Verification LBA range: start 0x20000 length 0x20000 00:08:49.751 Nvme3n1 : 5.09 1560.16 6.09 0.00 0.00 81095.44 15686.53 75379.56 00:08:49.751 [2024-11-04T16:03:08.473Z] =================================================================================================================== 00:08:49.751 [2024-11-04T16:03:08.473Z] Total : 17293.49 67.55 0.00 0.00 88121.96 7053.67 84222.97 00:08:51.140 00:08:51.140 real 0m7.635s 00:08:51.140 user 0m14.104s 00:08:51.140 sys 0m0.307s 00:08:51.140 16:03:09 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:51.140 16:03:09 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:51.140 ************************************ 00:08:51.140 END TEST bdev_verify 00:08:51.140 ************************************ 00:08:51.140 16:03:09 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:51.140 16:03:09 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:08:51.140 16:03:09 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:51.140 16:03:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.140 ************************************ 00:08:51.140 START TEST bdev_verify_big_io 00:08:51.140 ************************************ 00:08:51.140 16:03:09 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:51.400 [2024-11-04 16:03:09.973531] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:08:51.400 [2024-11-04 16:03:09.973702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61700 ] 00:08:51.659 [2024-11-04 16:03:10.172637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:51.659 [2024-11-04 16:03:10.292608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.659 [2024-11-04 16:03:10.292645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.596 Running I/O for 5 seconds... 00:08:56.444 1661.00 IOPS, 103.81 MiB/s [2024-11-04T16:03:15.737Z] 1849.50 IOPS, 115.59 MiB/s [2024-11-04T16:03:16.673Z] 1870.00 IOPS, 116.87 MiB/s [2024-11-04T16:03:17.240Z] 2104.50 IOPS, 131.53 MiB/s [2024-11-04T16:03:17.499Z] 2261.40 IOPS, 141.34 MiB/s 00:08:58.777 Latency(us) 00:08:58.777 [2024-11-04T16:03:17.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.777 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:58.777 Verification LBA range: start 0x0 length 0xbd0b 00:08:58.777 Nvme0n1 : 5.62 108.16 6.76 0.00 0.00 1133658.99 11738.58 1320616.20 00:08:58.777 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:58.777 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:58.777 Nvme0n1 : 5.44 211.41 13.21 0.00 0.00 594755.67 22634.92 599667.56 00:08:58.777 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:58.777 Verification LBA range: start 0x0 length 0xa000 00:08:58.777 Nvme1n1 : 5.62 104.57 6.54 0.00 0.00 1122240.84 45059.29 1846167.54 00:08:58.777 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:58.777 Verification LBA range: start 0xa000 length 0xa000 00:08:58.777 Nvme1n1 : 5.44 211.66 13.23 0.00 0.00 582937.93 68220.61 552502.70 00:08:58.777 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:58.777 Verification LBA range: start 0x0 length 0x8000 00:08:58.777 Nvme2n1 : 5.72 124.00 7.75 0.00 0.00 910142.51 36847.55 1280189.17 00:08:58.777 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:58.777 Verification LBA range: start 0x8000 length 0x8000 00:08:58.777 Nvme2n1 : 5.45 211.56 13.22 0.00 0.00 572340.06 67799.49 542395.94 00:08:58.777 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:58.777 Verification LBA range: start 0x0 length 0x8000 00:08:58.777 Nvme2n2 : 5.87 139.74 8.73 0.00 0.00 775793.07 24845.78 1967448.62 00:08:58.777 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:58.777 Verification LBA range: start 0x8000 length 0x8000 00:08:58.777 Nvme2n2 : 5.51 212.03 13.25 0.00 0.00 557261.51 60640.54 559240.53 00:08:58.777 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:58.777 Verification LBA range: start 0x0 length 0x8000 00:08:58.777 Nvme2n3 : 6.01 179.20 11.20 0.00 0.00 583511.58 14212.63 1994399.97 00:08:58.777 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:58.777 Verification LBA range: start 0x8000 length 0x8000 00:08:58.777 Nvme2n3 : 5.55 226.04 14.13 0.00 0.00 519057.54 8685.49 616512.15 00:08:58.777 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:58.777 Verification LBA range: start 0x0 length 0x2000 00:08:58.777 Nvme3n1 : 6.22 292.80 
18.30 0.00 0.00 349415.43 218.78 2034827.00 00:08:58.777 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:58.777 Verification LBA range: start 0x2000 length 0x2000 00:08:58.777 Nvme3n1 : 5.55 230.63 14.41 0.00 0.00 500148.40 4421.71 616512.15 00:08:58.777 [2024-11-04T16:03:17.499Z] =================================================================================================================== 00:08:58.777 [2024-11-04T16:03:17.499Z] Total : 2251.80 140.74 0.00 0.00 614455.74 218.78 2034827.00 00:09:00.679 00:09:00.679 real 0m9.472s 00:09:00.679 user 0m17.620s 00:09:00.679 sys 0m0.384s 00:09:00.679 16:03:19 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:00.679 16:03:19 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:00.679 ************************************ 00:09:00.679 END TEST bdev_verify_big_io 00:09:00.679 ************************************ 00:09:00.679 16:03:19 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:00.679 16:03:19 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:09:00.679 16:03:19 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:00.679 16:03:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:00.679 ************************************ 00:09:00.679 START TEST bdev_write_zeroes 00:09:00.679 ************************************ 00:09:00.679 16:03:19 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:00.937 [2024-11-04 16:03:19.480329] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:09:00.937 [2024-11-04 16:03:19.480461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61826 ] 00:09:01.195 [2024-11-04 16:03:19.671807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.195 [2024-11-04 16:03:19.785033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.763 Running I/O for 1 seconds... 
00:09:03.139 71808.00 IOPS, 280.50 MiB/s 00:09:03.139 Latency(us) 00:09:03.139 [2024-11-04T16:03:21.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.139 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:03.139 Nvme0n1 : 1.02 11894.97 46.46 0.00 0.00 10733.86 8632.85 30109.71 00:09:03.139 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:03.139 Nvme1n1 : 1.02 11882.43 46.42 0.00 0.00 10732.60 8738.13 31162.50 00:09:03.139 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:03.139 Nvme2n1 : 1.02 11870.44 46.37 0.00 0.00 10697.17 8738.13 28846.37 00:09:03.139 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:03.139 Nvme2n2 : 1.03 11859.65 46.33 0.00 0.00 10632.37 8632.85 24214.10 00:09:03.139 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:03.139 Nvme2n3 : 1.03 11902.26 46.49 0.00 0.00 10585.57 6027.21 21792.69 00:09:03.139 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:03.139 Nvme3n1 : 1.03 11891.35 46.45 0.00 0.00 10556.12 6158.80 19687.12 00:09:03.139 [2024-11-04T16:03:21.861Z] =================================================================================================================== 00:09:03.139 [2024-11-04T16:03:21.861Z] Total : 71301.10 278.52 0.00 0.00 10656.13 6027.21 31162.50 00:09:04.075 00:09:04.075 real 0m3.270s 00:09:04.075 user 0m2.892s 00:09:04.075 sys 0m0.264s 00:09:04.075 16:03:22 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:04.075 16:03:22 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:04.075 ************************************ 00:09:04.075 END TEST bdev_write_zeroes 00:09:04.075 ************************************ 00:09:04.075 16:03:22 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:04.075 16:03:22 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:09:04.075 16:03:22 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:04.075 16:03:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:04.075 ************************************ 00:09:04.075 START TEST bdev_json_nonenclosed 00:09:04.075 ************************************ 00:09:04.075 16:03:22 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:04.334 [2024-11-04 16:03:22.812735] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:09:04.334 [2024-11-04 16:03:22.812881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61879 ] 00:09:04.334 [2024-11-04 16:03:22.996459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.592 [2024-11-04 16:03:23.117399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.592 [2024-11-04 16:03:23.117495] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:04.592 [2024-11-04 16:03:23.117517] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:04.592 [2024-11-04 16:03:23.117529] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:04.852 00:09:04.852 real 0m0.651s 00:09:04.852 user 0m0.412s 00:09:04.852 sys 0m0.135s 00:09:04.852 16:03:23 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:04.852 16:03:23 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:04.852 ************************************ 00:09:04.852 END TEST bdev_json_nonenclosed 00:09:04.852 ************************************ 00:09:04.852 16:03:23 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:04.852 16:03:23 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:09:04.852 16:03:23 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:04.852 16:03:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:04.852 ************************************ 00:09:04.852 START TEST bdev_json_nonarray 00:09:04.852 ************************************ 00:09:04.852 16:03:23 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:04.852 [2024-11-04 16:03:23.541557] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:09:04.852 [2024-11-04 16:03:23.541687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61905 ] 00:09:05.110 [2024-11-04 16:03:23.724299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.369 [2024-11-04 16:03:23.842617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.369 [2024-11-04 16:03:23.842718] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:05.369 [2024-11-04 16:03:23.842742] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:05.369 [2024-11-04 16:03:23.842775] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:05.632 00:09:05.632 real 0m0.661s 00:09:05.632 user 0m0.407s 00:09:05.632 sys 0m0.145s 00:09:05.632 16:03:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:05.632 16:03:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:05.632 ************************************ 00:09:05.632 END TEST bdev_json_nonarray 00:09:05.632 ************************************ 00:09:05.632 16:03:24 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:09:05.632 16:03:24 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:09:05.632 16:03:24 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:09:05.632 16:03:24 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:05.632 16:03:24 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:09:05.632 16:03:24 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:05.632 16:03:24 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:05.632 16:03:24 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:09:05.632 16:03:24 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:09:05.632 16:03:24 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:09:05.632 16:03:24 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:09:05.632 00:09:05.632 real 0m43.582s 00:09:05.632 user 1m4.408s 00:09:05.632 sys 0m7.770s 00:09:05.632 16:03:24 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:05.632 16:03:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.632 ************************************ 00:09:05.632 END TEST blockdev_nvme 00:09:05.632 ************************************ 00:09:05.632 16:03:24 -- spdk/autotest.sh@209 -- # uname -s 00:09:05.632 16:03:24 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:09:05.632 16:03:24 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:05.632 16:03:24 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:05.632 16:03:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:05.632 16:03:24 -- common/autotest_common.sh@10 -- # set +x 00:09:05.632 ************************************ 00:09:05.632 START TEST blockdev_nvme_gpt 00:09:05.632 ************************************ 00:09:05.632 16:03:24 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:05.898 * Looking for test storage... 
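The START TEST / END TEST banners and the '[' 13 -le 1 ']' argument-count check visible throughout this log come from the run_test helper in test/common/autotest_common.sh, whose body is not shown here. Very roughly, it wraps each test command like the sketch below; the real helper does more (per-test timing, nested suites), so treat this only as an approximation.

# Approximate shape of the run_test pattern seen in this log.
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    "$@"              # run the test command
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return "$rc"
}
# e.g.: run_test blockdev_nvme_gpt "$rootdir/test/bdev/blockdev.sh" gpt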
00:09:05.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:05.898 16:03:24 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:05.898 16:03:24 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:09:05.898 16:03:24 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:05.898 16:03:24 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:09:05.898 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.899 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:09:05.899 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.899 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:09:05.899 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:09:05.899 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.899 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:09:05.899 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.899 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.899 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.899 16:03:24 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:09:05.899 16:03:24 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.899 16:03:24 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:05.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.899 --rc genhtml_branch_coverage=1 00:09:05.899 --rc genhtml_function_coverage=1 00:09:05.899 --rc genhtml_legend=1 00:09:05.899 --rc geninfo_all_blocks=1 00:09:05.899 --rc geninfo_unexecuted_blocks=1 00:09:05.899 00:09:05.899 ' 00:09:05.899 16:03:24 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:05.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.899 --rc 
genhtml_branch_coverage=1 00:09:05.899 --rc genhtml_function_coverage=1 00:09:05.899 --rc genhtml_legend=1 00:09:05.899 --rc geninfo_all_blocks=1 00:09:05.899 --rc geninfo_unexecuted_blocks=1 00:09:05.899 00:09:05.899 ' 00:09:05.899 16:03:24 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:05.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.899 --rc genhtml_branch_coverage=1 00:09:05.899 --rc genhtml_function_coverage=1 00:09:05.899 --rc genhtml_legend=1 00:09:05.899 --rc geninfo_all_blocks=1 00:09:05.899 --rc geninfo_unexecuted_blocks=1 00:09:05.899 00:09:05.899 ' 00:09:05.899 16:03:24 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:05.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.899 --rc genhtml_branch_coverage=1 00:09:05.899 --rc genhtml_function_coverage=1 00:09:05.899 --rc genhtml_legend=1 00:09:05.899 --rc geninfo_all_blocks=1 00:09:05.899 --rc geninfo_unexecuted_blocks=1 00:09:05.899 00:09:05.899 ' 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61989 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:05.899 16:03:24 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61989 00:09:05.899 16:03:24 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 61989 ']' 00:09:05.899 16:03:24 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.899 16:03:24 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:05.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.899 16:03:24 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.899 16:03:24 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:05.899 16:03:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:06.158 [2024-11-04 16:03:24.621931] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:09:06.158 [2024-11-04 16:03:24.622080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61989 ] 00:09:06.158 [2024-11-04 16:03:24.815842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.418 [2024-11-04 16:03:24.930970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.354 16:03:25 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:07.354 16:03:25 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:09:07.354 16:03:25 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:09:07.354 16:03:25 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:09:07.354 16:03:25 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:07.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:08.179 Waiting for block devices as requested 00:09:08.179 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:08.179 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:08.438 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:08.438 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.709 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 
00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:09:13.709 BYT; 00:09:13.709 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:09:13.709 BYT; 00:09:13.709 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:13.709 16:03:32 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:13.709 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:13.710 16:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:09:14.648 The operation has completed successfully. 00:09:14.648 16:03:33 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:09:16.022 The operation has completed successfully. 00:09:16.022 16:03:34 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:16.590 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:17.157 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:17.157 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:17.157 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:17.415 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:17.415 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:09:17.415 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.415 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:17.415 [] 00:09:17.415 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.415 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:09:17.415 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:17.415 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:17.415 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:17.415 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:17.415 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.415 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:17.982 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.982 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:09:17.982 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.982 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:17.982 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.982 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:09:17.982 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:09:17.982 16:03:36 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.982 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:17.982 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.982 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:09:17.982 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.982 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:17.982 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.982 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:17.982 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.982 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:17.982 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.982 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:09:17.982 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:09:17.982 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:09:17.982 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.983 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:17.983 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.983 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:09:17.983 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:09:17.983 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "6b6264dc-ff5c-4a9f-bb80-abb8f866d7c7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6b6264dc-ff5c-4a9f-bb80-abb8f866d7c7",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "dd5b2b05-67b7-4eba-97e9-d6e4bc1ec009"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dd5b2b05-67b7-4eba-97e9-d6e4bc1ec009",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "1cf7a283-e3ba-4e08-b17c-fdbebf32b34b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1cf7a283-e3ba-4e08-b17c-fdbebf32b34b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "8f22affe-fd0b-4a07-a447-2ac5dca7b335"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8f22affe-fd0b-4a07-a447-2ac5dca7b335",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "c37c3d08-f29f-46f2-b7af-1b62488a01c2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c37c3d08-f29f-46f2-b7af-1b62488a01c2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:17.983 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:09:17.983 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:09:17.983 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:09:17.983 16:03:36 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61989 00:09:17.983 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 61989 ']' 00:09:17.983 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 61989 00:09:17.983 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:09:18.242 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:18.242 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61989 00:09:18.242 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:18.242 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:18.242 killing process with pid 61989 00:09:18.242 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61989' 00:09:18.242 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 61989 00:09:18.242 16:03:36 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 61989 00:09:20.775 16:03:39 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:20.775 16:03:39 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:20.775 16:03:39 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:09:20.775 16:03:39 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:20.775 16:03:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:20.775 ************************************ 00:09:20.775 START TEST bdev_hello_world 00:09:20.775 ************************************ 00:09:20.776 16:03:39 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:20.776 
[2024-11-04 16:03:39.237248] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:09:20.776 [2024-11-04 16:03:39.237363] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62637 ] 00:09:20.776 [2024-11-04 16:03:39.418615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.034 [2024-11-04 16:03:39.533358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.600 [2024-11-04 16:03:40.186012] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:21.600 [2024-11-04 16:03:40.186058] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:21.600 [2024-11-04 16:03:40.186082] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:21.600 [2024-11-04 16:03:40.189075] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:21.600 [2024-11-04 16:03:40.189842] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:21.600 [2024-11-04 16:03:40.189873] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:21.600 [2024-11-04 16:03:40.190047] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:21.600 00:09:21.600 [2024-11-04 16:03:40.190068] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:22.977 00:09:22.977 real 0m2.155s 00:09:22.977 user 0m1.793s 00:09:22.977 sys 0m0.254s 00:09:22.977 16:03:41 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:22.977 16:03:41 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:22.977 ************************************ 00:09:22.978 END TEST bdev_hello_world 00:09:22.978 ************************************ 00:09:22.978 16:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:09:22.978 16:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:22.978 16:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:22.978 16:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:22.978 ************************************ 00:09:22.978 START TEST bdev_bounds 00:09:22.978 ************************************ 00:09:22.978 16:03:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:09:22.978 16:03:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62680 00:09:22.978 16:03:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:22.978 16:03:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:22.978 Process bdevio pid: 62680 00:09:22.978 16:03:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62680' 00:09:22.978 16:03:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62680 00:09:22.978 16:03:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 62680 ']' 00:09:22.978 16:03:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.978 16:03:41 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:22.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.978 16:03:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.978 16:03:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:22.978 16:03:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:22.978 [2024-11-04 16:03:41.463330] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:09:22.978 [2024-11-04 16:03:41.463453] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62680 ] 00:09:22.978 [2024-11-04 16:03:41.646185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:23.236 [2024-11-04 16:03:41.768158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.236 [2024-11-04 16:03:41.771845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.236 [2024-11-04 16:03:41.771862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.803 16:03:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:23.803 16:03:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:09:23.803 16:03:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:24.062 I/O targets: 00:09:24.062 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:24.062 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:09:24.062 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:09:24.062 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:24.062 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:24.062 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:24.062 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:24.062 00:09:24.062 00:09:24.062 CUnit - A unit testing framework for C - Version 2.1-3 00:09:24.062 http://cunit.sourceforge.net/ 00:09:24.062 00:09:24.062 00:09:24.062 Suite: bdevio tests on: Nvme3n1 00:09:24.062 Test: blockdev write read block ...passed 00:09:24.062 Test: blockdev write zeroes read block ...passed 00:09:24.062 Test: blockdev write zeroes read no split ...passed 00:09:24.062 Test: blockdev write zeroes read split ...passed 00:09:24.062 Test: blockdev write zeroes read split partial ...passed 00:09:24.062 Test: blockdev reset ...[2024-11-04 16:03:42.627280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:09:24.062 [2024-11-04 16:03:42.631957] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:09:24.062 passed 00:09:24.062 Test: blockdev write read 8 blocks ...passed 00:09:24.062 Test: blockdev write read size > 128k ...passed 00:09:24.062 Test: blockdev write read invalid size ...passed 00:09:24.062 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.062 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.062 Test: blockdev write read max offset ...passed 00:09:24.062 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.062 Test: blockdev writev readv 8 blocks ...passed 00:09:24.062 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.063 Test: blockdev writev readv block ...passed 00:09:24.063 Test: blockdev writev readv size > 128k ...passed 00:09:24.063 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.063 Test: blockdev comparev and writev ...[2024-11-04 16:03:42.641355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9604000 len:0x1000 00:09:24.063 [2024-11-04 16:03:42.641422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:24.063 passed 00:09:24.063 Test: blockdev nvme passthru rw ...passed 00:09:24.063 Test: blockdev nvme passthru vendor specific ...[2024-11-04 16:03:42.642264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:24.063 [2024-11-04 16:03:42.642305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:24.063 passed 00:09:24.063 Test: blockdev nvme admin passthru ...passed 00:09:24.063 Test: blockdev copy ...passed 00:09:24.063 Suite: bdevio tests on: Nvme2n3 00:09:24.063 Test: blockdev write read block ...passed 00:09:24.063 Test: blockdev write zeroes read block ...passed 00:09:24.063 Test: blockdev write zeroes read no split ...passed 00:09:24.063 Test: blockdev write zeroes read split ...passed 00:09:24.063 Test: blockdev write zeroes read split partial ...passed 00:09:24.063 Test: blockdev reset ...[2024-11-04 16:03:42.728506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:24.063 [2024-11-04 16:03:42.733789] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:24.063 passed 00:09:24.063 Test: blockdev write read 8 blocks ...passed 00:09:24.063 Test: blockdev write read size > 128k ...passed 00:09:24.063 Test: blockdev write read invalid size ...passed 00:09:24.063 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.063 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.063 Test: blockdev write read max offset ...passed 00:09:24.063 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.063 Test: blockdev writev readv 8 blocks ...passed 00:09:24.063 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.063 Test: blockdev writev readv block ...passed 00:09:24.063 Test: blockdev writev readv size > 128k ...passed 00:09:24.063 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.063 Test: blockdev comparev and writev ...[2024-11-04 16:03:42.743075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9602000 len:0x1000 00:09:24.063 [2024-11-04 16:03:42.743148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:24.063 passed 00:09:24.063 Test: blockdev nvme passthru rw ...passed 00:09:24.063 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.063 Test: blockdev nvme admin passthru ...[2024-11-04 16:03:42.743940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:24.063 [2024-11-04 16:03:42.743980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:24.063 passed 00:09:24.063 Test: blockdev copy ...passed 00:09:24.063 Suite: bdevio tests on: Nvme2n2 00:09:24.063 Test: blockdev write read block ...passed 00:09:24.063 Test: blockdev write zeroes read block ...passed 00:09:24.063 Test: blockdev write zeroes read no split ...passed 00:09:24.322 Test: blockdev write zeroes read split ...passed 00:09:24.322 Test: blockdev write zeroes read split partial ...passed 00:09:24.322 Test: blockdev reset ...[2024-11-04 16:03:42.827194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:24.322 [2024-11-04 16:03:42.832368] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:24.322 passed 00:09:24.322 Test: blockdev write read 8 blocks ...passed 00:09:24.322 Test: blockdev write read size > 128k ...passed 00:09:24.322 Test: blockdev write read invalid size ...passed 00:09:24.322 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.322 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.322 Test: blockdev write read max offset ...passed 00:09:24.322 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.322 Test: blockdev writev readv 8 blocks ...passed 00:09:24.322 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.322 Test: blockdev writev readv block ...passed 00:09:24.322 Test: blockdev writev readv size > 128k ...passed 00:09:24.322 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.322 Test: blockdev comparev and writev ...[2024-11-04 16:03:42.842221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cbc38000 len:0x1000 00:09:24.322 [2024-11-04 16:03:42.842286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:24.322 passed 00:09:24.322 Test: blockdev nvme passthru rw ...passed 00:09:24.322 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.322 Test: blockdev nvme admin passthru ...[2024-11-04 16:03:42.843116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:24.322 [2024-11-04 16:03:42.843158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:24.322 passed 00:09:24.322 Test: blockdev copy ...passed 00:09:24.322 Suite: bdevio tests on: Nvme2n1 00:09:24.322 Test: blockdev write read block ...passed 00:09:24.322 Test: blockdev write zeroes read block ...passed 00:09:24.322 Test: blockdev write zeroes read no split ...passed 00:09:24.322 Test: blockdev write zeroes read split ...passed 00:09:24.322 Test: blockdev write zeroes read split partial ...passed 00:09:24.322 Test: blockdev reset ...[2024-11-04 16:03:42.932370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:24.322 [2024-11-04 16:03:42.937818] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:24.322 passed 00:09:24.322 Test: blockdev write read 8 blocks ...passed 00:09:24.322 Test: blockdev write read size > 128k ...passed 00:09:24.322 Test: blockdev write read invalid size ...passed 00:09:24.322 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.322 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.322 Test: blockdev write read max offset ...passed 00:09:24.322 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.322 Test: blockdev writev readv 8 blocks ...passed 00:09:24.322 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.322 Test: blockdev writev readv block ...passed 00:09:24.322 Test: blockdev writev readv size > 128k ...passed 00:09:24.322 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.322 Test: blockdev comparev and writev ...[2024-11-04 16:03:42.947224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cbc34000 len:0x1000 00:09:24.322 [2024-11-04 16:03:42.947292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:24.322 passed 00:09:24.322 Test: blockdev nvme passthru rw ...passed 00:09:24.322 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.322 Test: blockdev nvme admin passthru ...[2024-11-04 16:03:42.948068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:24.322 [2024-11-04 16:03:42.948108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:24.322 passed 00:09:24.322 Test: blockdev copy ...passed 00:09:24.322 Suite: bdevio tests on: Nvme1n1p2 00:09:24.322 Test: blockdev write read block ...passed 00:09:24.322 Test: blockdev write zeroes read block ...passed 00:09:24.322 Test: blockdev write zeroes read no split ...passed 00:09:24.322 Test: blockdev write zeroes read split ...passed 00:09:24.322 Test: blockdev write zeroes read split partial ...passed 00:09:24.322 Test: blockdev reset ...[2024-11-04 16:03:43.031102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:24.322 [2024-11-04 16:03:43.036404] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:09:24.322 passed 00:09:24.323 Test: blockdev write read 8 blocks ...passed 00:09:24.323 Test: blockdev write read size > 128k ...passed 00:09:24.323 Test: blockdev write read invalid size ...passed 00:09:24.323 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.323 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.323 Test: blockdev write read max offset ...passed 00:09:24.323 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.323 Test: blockdev writev readv 8 blocks ...passed 00:09:24.323 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.323 Test: blockdev writev readv block ...passed 00:09:24.582 Test: blockdev writev readv size > 128k ...passed 00:09:24.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.582 Test: blockdev comparev and writev ...[2024-11-04 16:03:43.046591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 lpassed 00:09:24.582 Test: blockdev nvme passthru rw ...passed 00:09:24.582 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.582 Test: blockdev nvme admin passthru ...passed 00:09:24.582 Test: blockdev copy ...en:1 SGL DATA BLOCK ADDRESS 0x2cbc30000 len:0x1000 00:09:24.582 [2024-11-04 16:03:43.046797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:24.582 passed 00:09:24.582 Suite: bdevio tests on: Nvme1n1p1 00:09:24.582 Test: blockdev write read block ...passed 00:09:24.582 Test: blockdev write zeroes read block ...passed 00:09:24.582 Test: blockdev write zeroes read no split ...passed 00:09:24.582 Test: blockdev write zeroes read split ...passed 00:09:24.582 Test: blockdev write zeroes read split partial ...passed 00:09:24.582 Test: blockdev reset ...[2024-11-04 16:03:43.136805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:24.582 [2024-11-04 16:03:43.141684] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller spassed 00:09:24.582 Test: blockdev write read 8 blocks ...uccessful. 
00:09:24.582 passed 00:09:24.582 Test: blockdev write read size > 128k ...passed 00:09:24.582 Test: blockdev write read invalid size ...passed 00:09:24.582 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.582 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.582 Test: blockdev write read max offset ...passed 00:09:24.582 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.582 Test: blockdev writev readv 8 blocks ...passed 00:09:24.582 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.582 Test: blockdev writev readv block ...passed 00:09:24.582 Test: blockdev writev readv size > 128k ...passed 00:09:24.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.582 Test: blockdev comparev and writev ...[2024-11-04 16:03:43.152907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b980e000 len:0x1000 00:09:24.582 [2024-11-04 16:03:43.153116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:24.582 passed 00:09:24.582 Test: blockdev nvme passthru rw ...passed 00:09:24.582 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.582 Test: blockdev nvme admin passthru ...passed 00:09:24.582 Test: blockdev copy ...passed 00:09:24.582 Suite: bdevio tests on: Nvme0n1 00:09:24.582 Test: blockdev write read block ...passed 00:09:24.582 Test: blockdev write zeroes read block ...passed 00:09:24.582 Test: blockdev write zeroes read no split ...passed 00:09:24.582 Test: blockdev write zeroes read split ...passed 00:09:24.582 Test: blockdev write zeroes read split partial ...passed 00:09:24.582 Test: blockdev reset ...[2024-11-04 16:03:43.232084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:24.582 [2024-11-04 16:03:43.236991] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:24.582 passed 00:09:24.582 Test: blockdev write read 8 blocks ...passed 00:09:24.582 Test: blockdev write read size > 128k ...passed 00:09:24.582 Test: blockdev write read invalid size ...passed 00:09:24.582 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.582 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.582 Test: blockdev write read max offset ...passed 00:09:24.582 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.582 Test: blockdev writev readv 8 blocks ...passed 00:09:24.582 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.582 Test: blockdev writev readv block ...passed 00:09:24.582 Test: blockdev writev readv size > 128k ...passed 00:09:24.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.582 Test: blockdev comparev and writev ...passed 00:09:24.582 Test: blockdev nvme passthru rw ...[2024-11-04 16:03:43.246276] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:24.582 separate metadata which is not supported yet.
00:09:24.582 passed 00:09:24.582 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.582 Test: blockdev nvme admin passthru ...[2024-11-04 16:03:43.246965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:24.582 [2024-11-04 16:03:43.247022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:24.582 passed 00:09:24.582 Test: blockdev copy ...passed 00:09:24.582 00:09:24.582 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.582 suites 7 7 n/a 0 0 00:09:24.582 tests 161 161 161 0 0 00:09:24.582 asserts 1025 1025 1025 0 n/a 00:09:24.582 00:09:24.582 Elapsed time = 1.919 seconds 00:09:24.582 0 00:09:24.582 16:03:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62680 00:09:24.582 16:03:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 62680 ']' 00:09:24.582 16:03:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 62680 00:09:24.582 16:03:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:09:24.582 16:03:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:24.582 16:03:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62680 00:09:24.842 killing process with pid 62680 00:09:24.842 16:03:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:24.842 16:03:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:24.842 16:03:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62680' 00:09:24.842 16:03:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 62680 00:09:24.842 16:03:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 62680 00:09:25.779 ************************************ 00:09:25.779 END TEST bdev_bounds 00:09:25.779 ************************************ 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:25.779 00:09:25.779 real 0m2.988s 00:09:25.779 user 0m7.619s 00:09:25.779 sys 0m0.426s 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:25.779 16:03:44 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:25.779 16:03:44 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:25.779 16:03:44 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:25.779 16:03:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:25.779 ************************************ 00:09:25.779 START TEST bdev_nbd 00:09:25.779 ************************************ 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62745 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:25.779 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:25.780 16:03:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62745 /var/tmp/spdk-nbd.sock 00:09:25.780 16:03:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 62745 ']' 00:09:25.780 16:03:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:25.780 16:03:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:25.780 16:03:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:25.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:25.780 16:03:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:25.780 16:03:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:26.038 [2024-11-04 16:03:44.547950] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:09:26.038 [2024-11-04 16:03:44.548244] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.038 [2024-11-04 16:03:44.731529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.297 [2024-11-04 16:03:44.847487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.865 16:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:26.865 16:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:09:26.865 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:26.865 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.865 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:26.865 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:26.865 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:26.865 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.865 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:26.865 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:26.865 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:26.865 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:26.865 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:26.865 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:26.865 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:27.124 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:27.124 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:27.124 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:27.124 16:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:27.124 16:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:27.124 16:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:27.124 16:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:27.124 16:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:27.124 16:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:27.124 16:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:27.124 16:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:27.124 16:03:45 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.124 1+0 records in 00:09:27.124 1+0 records out 00:09:27.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571296 s, 7.2 MB/s 00:09:27.382 16:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.382 16:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:27.382 16:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.382 16:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:27.382 16:03:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:27.382 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:27.382 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:27.382 16:03:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:09:27.382 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:27.382 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:27.382 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:27.382 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:27.382 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:27.382 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:27.382 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:27.382 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:27.382 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:27.382 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:27.382 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:27.382 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.382 1+0 records in 00:09:27.382 1+0 records out 00:09:27.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135806 s, 3.0 MB/s 00:09:27.382 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.641 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:27.641 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.641 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:27.641 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:27.641 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:27.642 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:27.642 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.901 1+0 records in 00:09:27.901 1+0 records out 00:09:27.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599572 s, 6.8 MB/s 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:27.901 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.160 1+0 records in 00:09:28.160 1+0 records out 00:09:28.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761493 s, 5.4 MB/s 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:28.160 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.419 1+0 records in 00:09:28.419 1+0 records out 00:09:28.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072354 s, 5.7 MB/s 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:28.419 16:03:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.678 1+0 records in 00:09:28.678 1+0 records out 00:09:28.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00080268 s, 5.1 MB/s 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:28.678 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.936 1+0 records in 00:09:28.936 1+0 records out 00:09:28.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000867364 s, 4.7 MB/s 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:28.936 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:29.194 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:29.194 { 00:09:29.194 "nbd_device": "/dev/nbd0", 00:09:29.194 "bdev_name": "Nvme0n1" 00:09:29.194 }, 00:09:29.194 { 00:09:29.194 "nbd_device": "/dev/nbd1", 00:09:29.194 "bdev_name": "Nvme1n1p1" 00:09:29.194 }, 00:09:29.194 { 00:09:29.194 "nbd_device": "/dev/nbd2", 00:09:29.194 "bdev_name": "Nvme1n1p2" 00:09:29.194 }, 00:09:29.194 { 00:09:29.194 "nbd_device": "/dev/nbd3", 00:09:29.194 "bdev_name": "Nvme2n1" 00:09:29.194 }, 00:09:29.194 { 00:09:29.194 "nbd_device": "/dev/nbd4", 00:09:29.194 "bdev_name": "Nvme2n2" 00:09:29.194 }, 00:09:29.194 { 00:09:29.194 "nbd_device": "/dev/nbd5", 00:09:29.194 "bdev_name": "Nvme2n3" 00:09:29.194 }, 00:09:29.194 { 00:09:29.194 "nbd_device": "/dev/nbd6", 00:09:29.194 "bdev_name": "Nvme3n1" 00:09:29.194 } 00:09:29.194 ]' 00:09:29.194 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:29.194 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:29.194 { 00:09:29.194 "nbd_device": "/dev/nbd0", 00:09:29.194 "bdev_name": "Nvme0n1" 00:09:29.194 }, 00:09:29.194 { 00:09:29.194 "nbd_device": "/dev/nbd1", 00:09:29.194 "bdev_name": "Nvme1n1p1" 00:09:29.194 }, 00:09:29.194 { 00:09:29.194 "nbd_device": "/dev/nbd2", 00:09:29.194 "bdev_name": "Nvme1n1p2" 00:09:29.194 }, 00:09:29.194 { 00:09:29.194 "nbd_device": "/dev/nbd3", 00:09:29.194 "bdev_name": "Nvme2n1" 00:09:29.194 }, 00:09:29.194 { 00:09:29.194 "nbd_device": "/dev/nbd4", 00:09:29.194 "bdev_name": "Nvme2n2" 00:09:29.194 }, 00:09:29.194 { 00:09:29.195 "nbd_device": "/dev/nbd5", 00:09:29.195 "bdev_name": "Nvme2n3" 00:09:29.195 }, 00:09:29.195 { 00:09:29.195 "nbd_device": "/dev/nbd6", 00:09:29.195 "bdev_name": "Nvme3n1" 00:09:29.195 } 00:09:29.195 ]' 00:09:29.195 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:29.195 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:29.195 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.195 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:29.195 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:29.195 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:29.195 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.195 16:03:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:29.453 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:29.453 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:29.453 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:29.453 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.453 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.453 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:29.453 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:29.453 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.453 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.453 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:29.713 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:29.713 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:29.713 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:29.713 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.713 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.713 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:29.713 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:29.713 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.713 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.713 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:29.971 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:29.971 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:29.971 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:29.971 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.971 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.972 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:29.972 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:29.972 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.972 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.972 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:30.231 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:30.231 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:30.231 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:30.231 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:30.231 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:30.231 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:30.231 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:30.231 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:30.231 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:30.231 16:03:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:30.490 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:30.490 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:30.490 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:30.490 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:30.490 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:30.490 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:30.490 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:30.490 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:30.490 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:30.490 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:30.750 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:30.750 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:30.750 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:30.750 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:30.750 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:30.750 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:30.750 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:30.750 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:30.750 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:30.750 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:31.008 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:31.008 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:31.008 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:31.008 16:03:49 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:31.008 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:31.008 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:31.008 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:31.008 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:31.008 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:31.008 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.008 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:31.267 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:31.268 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:31.268 16:03:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:31.526 /dev/nbd0 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.526 1+0 records in 00:09:31.526 1+0 records out 00:09:31.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592588 s, 6.9 MB/s 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:31.526 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:09:31.786 /dev/nbd1 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.786 1+0 records in 00:09:31.786 1+0 records out 00:09:31.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604553 s, 6.8 MB/s 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:31.786 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:09:32.045 /dev/nbd10 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:32.045 1+0 records in 00:09:32.045 1+0 records out 00:09:32.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000739549 s, 5.5 MB/s 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 
'!=' 0 ']' 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:32.045 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:32.304 /dev/nbd11 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:32.304 1+0 records in 00:09:32.304 1+0 records out 00:09:32.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000824248 s, 5.0 MB/s 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:32.304 16:03:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:32.564 /dev/nbd12 00:09:32.564 16:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:32.565 16:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:32.565 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:09:32.565 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:32.565 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:32.565 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:32.565 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:09:32.565 16:03:51 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:32.565 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:32.565 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:32.565 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:32.565 1+0 records in 00:09:32.565 1+0 records out 00:09:32.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644278 s, 6.4 MB/s 00:09:32.565 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.565 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:32.565 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:32.565 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:32.565 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:32.565 16:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:32.565 16:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:32.565 16:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:32.823 /dev/nbd13 00:09:32.823 16:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:32.823 16:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:32.823 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:09:32.823 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:32.823 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:32.823 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:32.823 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:09:32.823 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:32.823 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:32.823 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:32.823 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:32.823 1+0 records in 00:09:32.823 1+0 records out 00:09:32.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000882345 s, 4.6 MB/s 00:09:32.823 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.081 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:33.081 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.081 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:33.081 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:33.081 16:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:09:33.081 16:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:33.081 16:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:33.081 /dev/nbd14 00:09:33.081 16:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:33.081 16:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:33.081 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:09:33.081 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:33.081 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:33.081 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:33.081 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:09:33.340 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:33.340 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:33.340 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:33.340 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:33.340 1+0 records in 00:09:33.340 1+0 records out 00:09:33.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000777294 s, 5.3 MB/s 00:09:33.340 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.340 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:33.340 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:33.340 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:33.340 16:03:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:33.340 16:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:33.340 16:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:33.340 16:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:33.340 16:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.341 16:03:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:33.341 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:33.341 { 00:09:33.341 "nbd_device": "/dev/nbd0", 00:09:33.341 "bdev_name": "Nvme0n1" 00:09:33.341 }, 00:09:33.341 { 00:09:33.341 "nbd_device": "/dev/nbd1", 00:09:33.341 "bdev_name": "Nvme1n1p1" 00:09:33.341 }, 00:09:33.341 { 00:09:33.341 "nbd_device": "/dev/nbd10", 00:09:33.341 "bdev_name": "Nvme1n1p2" 00:09:33.341 }, 00:09:33.341 { 00:09:33.341 "nbd_device": "/dev/nbd11", 00:09:33.341 "bdev_name": "Nvme2n1" 00:09:33.341 }, 00:09:33.341 { 00:09:33.341 "nbd_device": "/dev/nbd12", 00:09:33.341 "bdev_name": "Nvme2n2" 00:09:33.341 }, 00:09:33.341 { 00:09:33.341 "nbd_device": "/dev/nbd13", 00:09:33.341 "bdev_name": "Nvme2n3" 00:09:33.341 }, 00:09:33.341 { 
00:09:33.341 "nbd_device": "/dev/nbd14", 00:09:33.341 "bdev_name": "Nvme3n1" 00:09:33.341 } 00:09:33.341 ]' 00:09:33.341 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:33.341 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:33.341 { 00:09:33.341 "nbd_device": "/dev/nbd0", 00:09:33.341 "bdev_name": "Nvme0n1" 00:09:33.341 }, 00:09:33.341 { 00:09:33.341 "nbd_device": "/dev/nbd1", 00:09:33.341 "bdev_name": "Nvme1n1p1" 00:09:33.341 }, 00:09:33.341 { 00:09:33.341 "nbd_device": "/dev/nbd10", 00:09:33.341 "bdev_name": "Nvme1n1p2" 00:09:33.341 }, 00:09:33.341 { 00:09:33.341 "nbd_device": "/dev/nbd11", 00:09:33.341 "bdev_name": "Nvme2n1" 00:09:33.341 }, 00:09:33.341 { 00:09:33.341 "nbd_device": "/dev/nbd12", 00:09:33.341 "bdev_name": "Nvme2n2" 00:09:33.341 }, 00:09:33.341 { 00:09:33.341 "nbd_device": "/dev/nbd13", 00:09:33.341 "bdev_name": "Nvme2n3" 00:09:33.341 }, 00:09:33.341 { 00:09:33.341 "nbd_device": "/dev/nbd14", 00:09:33.341 "bdev_name": "Nvme3n1" 00:09:33.341 } 00:09:33.341 ]' 00:09:33.341 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:33.341 /dev/nbd1 00:09:33.341 /dev/nbd10 00:09:33.341 /dev/nbd11 00:09:33.341 /dev/nbd12 00:09:33.341 /dev/nbd13 00:09:33.341 /dev/nbd14' 00:09:33.600 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:33.600 /dev/nbd1 00:09:33.600 /dev/nbd10 00:09:33.600 /dev/nbd11 00:09:33.600 /dev/nbd12 00:09:33.600 /dev/nbd13 00:09:33.600 /dev/nbd14' 00:09:33.600 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:33.600 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:33.600 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:33.600 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:33.600 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:33.600 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:33.600 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:33.600 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:33.600 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:33.600 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:33.600 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:33.600 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:33.600 256+0 records in 00:09:33.600 256+0 records out 00:09:33.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00590777 s, 177 MB/s 00:09:33.600 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:33.600 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:33.600 256+0 records in 00:09:33.600 256+0 records out 00:09:33.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131668 s, 8.0 MB/s 00:09:33.600 
16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:33.600 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:33.859 256+0 records in 00:09:33.859 256+0 records out 00:09:33.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138802 s, 7.6 MB/s 00:09:33.859 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:33.859 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:33.859 256+0 records in 00:09:33.859 256+0 records out 00:09:33.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135964 s, 7.7 MB/s 00:09:33.859 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:33.859 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:34.118 256+0 records in 00:09:34.118 256+0 records out 00:09:34.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164044 s, 6.4 MB/s 00:09:34.118 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:34.118 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:34.118 256+0 records in 00:09:34.118 256+0 records out 00:09:34.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145131 s, 7.2 MB/s 00:09:34.118 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:34.118 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:34.377 256+0 records in 00:09:34.377 256+0 records out 00:09:34.377 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13917 s, 7.5 MB/s 00:09:34.377 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:34.377 16:03:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:34.637 256+0 records in 00:09:34.637 256+0 records out 00:09:34.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136252 s, 7.7 MB/s 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:34.637 16:03:53 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.637 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:34.896 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:34.896 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:34.896 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:34.896 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.896 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.896 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:34.897 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:34.897 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.897 
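
The nbd teardown above follows the same poll-until-gone idiom as device start: nbd_stop_disk is sent over the /var/tmp/spdk-nbd.sock RPC socket, and waitfornbd_exit then re-reads /proc/partitions up to 20 times until the device name disappears. A minimal standalone sketch of that idiom, assuming a 0.1 s retry delay (the delay and error handling are simplifications; the real helpers live in the SPDK test common scripts):

    # Stop an exported nbd device and wait for the kernel to release it.
    rpc_sock=/var/tmp/spdk-nbd.sock
    stop_nbd_and_wait() {
        local dev=$1 name
        name=$(basename "$dev")                                   # /dev/nbd13 -> nbd13
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" nbd_stop_disk "$dev"
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$name" /proc/partitions; then
                sleep 0.1                                         # still registered; retry (delay value is an assumption)
            else
                break                                             # kernel released the device
            fi
        done
        return 0
    }
    # Example: stop_nbd_and_wait /dev/nbd0
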
16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.897 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:35.156 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:35.156 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:35.156 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:35.156 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.156 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:35.156 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:35.156 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:35.156 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:35.156 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.156 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:35.156 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:35.156 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:35.156 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:35.156 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.156 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:35.156 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:35.415 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:35.415 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:35.415 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.415 16:03:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:35.415 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:35.415 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:35.415 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:35.415 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.415 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:35.415 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:35.415 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:35.415 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:35.415 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.415 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:35.674 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:35.674 16:03:54 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:35.674 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:35.674 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.674 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:35.674 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:35.674 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:35.674 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:35.674 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.674 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:35.933 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:35.933 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:35.933 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:35.933 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.933 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:35.933 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:35.933 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:35.933 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:35.933 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.933 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:36.192 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:36.192 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:36.192 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:36.192 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.192 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.192 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:36.192 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:36.192 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.192 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:36.192 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:36.192 16:03:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:36.450 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:36.450 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:36.450 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:36.450 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:36.450 
16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:36.450 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:36.450 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:36.450 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:36.450 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:36.450 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:36.450 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:36.450 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:36.450 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:36.450 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:36.450 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:36.450 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:36.733 malloc_lvol_verify 00:09:36.734 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:36.993 eb0e1d99-4728-4e3e-8198-32643b5c8a07 00:09:36.993 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:37.250 e00f74a1-7100-4420-bf40-b6e016284677 00:09:37.251 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:37.510 /dev/nbd0 00:09:37.510 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:37.510 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:37.510 16:03:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:37.510 16:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:37.510 16:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:37.510 mke2fs 1.47.0 (5-Feb-2023) 00:09:37.510 Discarding device blocks: 0/4096 done 00:09:37.510 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:37.510 00:09:37.510 Allocating group tables: 0/1 done 00:09:37.510 Writing inode tables: 0/1 done 00:09:37.510 Creating journal (1024 blocks): done 00:09:37.510 Writing superblocks and filesystem accounting information: 0/1 done 00:09:37.510 00:09:37.510 16:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:37.510 16:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.510 16:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:37.510 16:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:37.510 16:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:37.510 16:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:37.510 16:03:56 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62745 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 62745 ']' 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 62745 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62745 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:37.769 killing process with pid 62745 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62745' 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 62745 00:09:37.769 16:03:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 62745 00:09:39.143 16:03:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:39.143 00:09:39.143 real 0m13.073s 00:09:39.143 user 0m16.903s 00:09:39.143 sys 0m5.577s 00:09:39.143 16:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:39.143 ************************************ 00:09:39.143 END TEST bdev_nbd 00:09:39.143 ************************************ 00:09:39.143 16:03:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:39.143 16:03:57 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:09:39.143 16:03:57 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:09:39.143 16:03:57 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:09:39.143 skipping fio tests on NVMe due to multi-ns failures. 00:09:39.143 16:03:57 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
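
Before the bdevperf-based verify runs start, the nbd RPC server is shut down with the killprocess pattern traced above: confirm the recorded pid still exists with kill -0, read its command name with ps (reactor_0 here) so a sudo wrapper is not signalled by mistake, then kill and wait so the exit status is reaped before the next test begins. A condensed sketch of that pattern (standalone form with the sudo branch omitted; 62745 is simply the pid from this run):

    # Minimal killprocess sketch; mirrors the checks visible in the trace.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                      # no pid recorded
        kill -0 "$pid" 2> /dev/null || return 0        # already exited
        local name
        name=$(ps --no-headers -o comm= "$pid")        # e.g. reactor_0
        if [ "$name" != sudo ]; then                   # sudo-wrapped case omitted in this sketch
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid" || true                            # reap the child so the next test starts clean
    }
    # Example: killprocess 62745
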
00:09:39.143 16:03:57 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:39.143 16:03:57 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:39.143 16:03:57 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:09:39.143 16:03:57 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:39.143 16:03:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:39.143 ************************************ 00:09:39.143 START TEST bdev_verify 00:09:39.143 ************************************ 00:09:39.143 16:03:57 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:39.143 [2024-11-04 16:03:57.680249] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:09:39.143 [2024-11-04 16:03:57.680365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63178 ] 00:09:39.143 [2024-11-04 16:03:57.860281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:39.400 [2024-11-04 16:03:57.976158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.400 [2024-11-04 16:03:57.976186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.337 Running I/O for 5 seconds... 
00:09:42.210 17536.00 IOPS, 68.50 MiB/s [2024-11-04T16:04:02.309Z] 17152.00 IOPS, 67.00 MiB/s [2024-11-04T16:04:03.244Z] 17472.00 IOPS, 68.25 MiB/s [2024-11-04T16:04:04.180Z] 17232.00 IOPS, 67.31 MiB/s [2024-11-04T16:04:04.180Z] 17395.20 IOPS, 67.95 MiB/s 00:09:45.458 Latency(us) 00:09:45.458 [2024-11-04T16:04:04.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.458 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:45.458 Verification LBA range: start 0x0 length 0xbd0bd 00:09:45.458 Nvme0n1 : 5.10 1153.67 4.51 0.00 0.00 110703.68 22003.25 91803.04 00:09:45.458 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:45.458 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:45.458 Nvme0n1 : 5.08 1297.35 5.07 0.00 0.00 98231.46 13423.04 87170.78 00:09:45.458 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:45.458 Verification LBA range: start 0x0 length 0x4ff80 00:09:45.458 Nvme1n1p1 : 5.10 1153.39 4.51 0.00 0.00 110332.33 19581.84 87170.78 00:09:45.458 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:45.458 Verification LBA range: start 0x4ff80 length 0x4ff80 00:09:45.458 Nvme1n1p1 : 5.08 1296.98 5.07 0.00 0.00 98144.99 12844.00 84222.97 00:09:45.458 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:45.458 Verification LBA range: start 0x0 length 0x4ff7f 00:09:45.458 Nvme1n1p2 : 5.11 1153.12 4.50 0.00 0.00 110125.93 19055.45 85065.20 00:09:45.458 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:45.458 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:09:45.458 Nvme1n1p2 : 5.08 1296.51 5.06 0.00 0.00 97903.87 12422.89 80432.94 00:09:45.458 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:45.458 Verification LBA range: start 0x0 length 0x80000 00:09:45.458 Nvme2n1 : 5.11 1152.57 4.50 0.00 0.00 110003.13 20108.23 82959.63 00:09:45.458 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:45.458 Verification LBA range: start 0x80000 length 0x80000 00:09:45.458 Nvme2n1 : 5.09 1296.01 5.06 0.00 0.00 97797.73 12791.36 79590.71 00:09:45.458 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:45.458 Verification LBA range: start 0x0 length 0x80000 00:09:45.458 Nvme2n2 : 5.11 1152.30 4.50 0.00 0.00 109841.68 20213.51 82538.51 00:09:45.458 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:45.458 Verification LBA range: start 0x80000 length 0x80000 00:09:45.458 Nvme2n2 : 5.09 1295.59 5.06 0.00 0.00 97690.81 12896.64 77485.13 00:09:45.458 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:45.458 Verification LBA range: start 0x0 length 0x80000 00:09:45.458 Nvme2n3 : 5.11 1152.05 4.50 0.00 0.00 109711.20 17160.43 87591.89 00:09:45.458 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:45.458 Verification LBA range: start 0x80000 length 0x80000 00:09:45.458 Nvme2n3 : 5.10 1304.98 5.10 0.00 0.00 97109.41 8053.82 80011.82 00:09:45.458 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:45.458 Verification LBA range: start 0x0 length 0x20000 00:09:45.458 Nvme3n1 : 5.11 1151.78 4.50 0.00 0.00 109626.72 16844.59 90539.69 00:09:45.458 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:45.458 Verification LBA range: start 0x20000 length 0x20000 00:09:45.458 
Nvme3n1 : 5.10 1304.52 5.10 0.00 0.00 97005.79 7264.23 83380.74 00:09:45.458 [2024-11-04T16:04:04.180Z] =================================================================================================================== 00:09:45.458 [2024-11-04T16:04:04.180Z] Total : 17160.81 67.03 0.00 0.00 103515.64 7264.23 91803.04 00:09:46.836 00:09:46.836 real 0m7.675s 00:09:46.836 user 0m14.194s 00:09:46.836 sys 0m0.309s 00:09:46.836 16:04:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:46.836 16:04:05 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:46.836 ************************************ 00:09:46.836 END TEST bdev_verify 00:09:46.836 ************************************ 00:09:46.836 16:04:05 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:46.836 16:04:05 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:09:46.836 16:04:05 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:46.836 16:04:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:46.836 ************************************ 00:09:46.836 START TEST bdev_verify_big_io 00:09:46.836 ************************************ 00:09:46.836 16:04:05 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:46.836 [2024-11-04 16:04:05.429452] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:09:46.836 [2024-11-04 16:04:05.429570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63276 ] 00:09:47.095 [2024-11-04 16:04:05.609907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:47.095 [2024-11-04 16:04:05.729413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.095 [2024-11-04 16:04:05.729451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.035 Running I/O for 5 seconds... 
00:09:52.708 2449.00 IOPS, 153.06 MiB/s [2024-11-04T16:04:12.367Z] 3122.50 IOPS, 195.16 MiB/s [2024-11-04T16:04:12.946Z] 3399.00 IOPS, 212.44 MiB/s [2024-11-04T16:04:12.946Z] 3211.25 IOPS, 200.70 MiB/s 00:09:54.224 Latency(us) 00:09:54.224 [2024-11-04T16:04:12.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.224 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:54.224 Verification LBA range: start 0x0 length 0xbd0b 00:09:54.224 Nvme0n1 : 5.69 90.86 5.68 0.00 0.00 1348655.36 15791.81 1495799.98 00:09:54.224 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:54.224 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:54.224 Nvme0n1 : 5.49 186.87 11.68 0.00 0.00 663997.32 22950.76 640094.59 00:09:54.224 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:54.224 Verification LBA range: start 0x0 length 0x4ff8 00:09:54.224 Nvme1n1p1 : 5.69 100.83 6.30 0.00 0.00 1161834.25 45269.85 1374518.90 00:09:54.224 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:54.224 Verification LBA range: start 0x4ff8 length 0x4ff8 00:09:54.224 Nvme1n1p1 : 5.49 197.35 12.33 0.00 0.00 625684.91 49902.11 640094.59 00:09:54.224 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:54.224 Verification LBA range: start 0x0 length 0x4ff7 00:09:54.224 Nvme1n1p2 : 5.76 107.48 6.72 0.00 0.00 1051111.02 35794.76 1138694.58 00:09:54.224 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:54.224 Verification LBA range: start 0x4ff7 length 0x4ff7 00:09:54.224 Nvme1n1p2 : 5.49 197.62 12.35 0.00 0.00 615400.28 49902.11 640094.59 00:09:54.224 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:54.224 Verification LBA range: start 0x0 length 0x8000 00:09:54.224 Nvme2n1 : 5.79 114.07 7.13 0.00 0.00 960943.34 32636.40 1091529.72 00:09:54.224 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:54.224 Verification LBA range: start 0x8000 length 0x8000 00:09:54.224 Nvme2n1 : 5.55 201.97 12.62 0.00 0.00 592656.99 29267.48 646832.42 00:09:54.224 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:54.224 Verification LBA range: start 0x0 length 0x8000 00:09:54.224 Nvme2n2 : 6.01 141.40 8.84 0.00 0.00 745201.82 18318.50 2263913.48 00:09:54.224 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:54.224 Verification LBA range: start 0x8000 length 0x8000 00:09:54.224 Nvme2n2 : 5.55 202.12 12.63 0.00 0.00 582510.98 29688.60 626618.91 00:09:54.224 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:54.224 Verification LBA range: start 0x0 length 0x8000 00:09:54.224 Nvme2n3 : 6.18 188.69 11.79 0.00 0.00 542095.26 14107.35 2317816.19 00:09:54.224 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:54.224 Verification LBA range: start 0x8000 length 0x8000 00:09:54.224 Nvme2n3 : 5.55 207.43 12.96 0.00 0.00 561183.53 26846.07 640094.59 00:09:54.224 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:54.224 Verification LBA range: start 0x0 length 0x2000 00:09:54.224 Nvme3n1 : 6.31 256.64 16.04 0.00 0.00 388980.80 799.46 2142632.40 00:09:54.224 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:54.224 Verification LBA range: start 0x2000 length 0x2000 00:09:54.224 Nvme3n1 : 5.58 218.18 13.64 0.00 0.00 
525205.53 4263.79 653570.26 00:09:54.224 [2024-11-04T16:04:12.946Z] =================================================================================================================== 00:09:54.224 [2024-11-04T16:04:12.946Z] Total : 2411.51 150.72 0.00 0.00 664279.46 799.46 2317816.19 00:09:56.756 00:09:56.756 real 0m9.598s 00:09:56.756 user 0m17.984s 00:09:56.756 sys 0m0.332s 00:09:56.756 16:04:14 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:56.756 16:04:14 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:56.756 ************************************ 00:09:56.756 END TEST bdev_verify_big_io 00:09:56.756 ************************************ 00:09:56.756 16:04:14 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:56.756 16:04:14 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:09:56.756 16:04:14 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:56.756 16:04:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:56.756 ************************************ 00:09:56.756 START TEST bdev_write_zeroes 00:09:56.756 ************************************ 00:09:56.756 16:04:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:56.756 [2024-11-04 16:04:15.092794] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:09:56.756 [2024-11-04 16:04:15.092923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63396 ] 00:09:56.756 [2024-11-04 16:04:15.273481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.756 [2024-11-04 16:04:15.387416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.692 Running I/O for 1 seconds... 
00:09:58.626 19981.00 IOPS, 78.05 MiB/s 00:09:58.626 Latency(us) 00:09:58.626 [2024-11-04T16:04:17.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.626 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:58.626 Nvme0n1 : 1.07 1498.09 5.85 0.00 0.00 83274.11 6606.24 254353.38 00:09:58.626 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:58.626 Nvme1n1p1 : 1.02 3304.41 12.91 0.00 0.00 38614.10 9422.44 117069.93 00:09:58.626 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:58.626 Nvme1n1p2 : 1.03 3183.15 12.43 0.00 0.00 39879.95 9422.44 117912.16 00:09:58.626 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:58.626 Nvme2n1 : 1.03 3157.90 12.34 0.00 0.00 39810.15 9527.72 117069.93 00:09:58.626 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:58.626 Nvme2n2 : 1.03 2975.99 11.62 0.00 0.00 42185.02 10369.95 113701.01 00:09:58.626 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:58.626 Nvme2n3 : 1.03 3056.02 11.94 0.00 0.00 41050.41 5500.81 113701.01 00:09:58.626 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:58.626 Nvme3n1 : 1.03 3265.96 12.76 0.00 0.00 38252.94 5711.37 112858.78 00:09:58.626 [2024-11-04T16:04:17.348Z] =================================================================================================================== 00:09:58.626 [2024-11-04T16:04:17.348Z] Total : 20441.52 79.85 0.00 0.00 43229.21 5500.81 254353.38 00:10:00.000 00:10:00.000 real 0m3.383s 00:10:00.000 user 0m2.985s 00:10:00.000 sys 0m0.282s 00:10:00.000 16:04:18 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:00.000 ************************************ 00:10:00.000 END TEST bdev_write_zeroes 00:10:00.000 ************************************ 00:10:00.000 16:04:18 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:00.000 16:04:18 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:00.000 16:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:10:00.000 16:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:00.000 16:04:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:00.000 ************************************ 00:10:00.000 START TEST bdev_json_nonenclosed 00:10:00.000 ************************************ 00:10:00.000 16:04:18 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:00.000 [2024-11-04 16:04:18.550313] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:10:00.000 [2024-11-04 16:04:18.550428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63455 ] 00:10:00.259 [2024-11-04 16:04:18.731684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.259 [2024-11-04 16:04:18.844843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.259 [2024-11-04 16:04:18.844939] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:00.259 [2024-11-04 16:04:18.844962] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:00.259 [2024-11-04 16:04:18.844974] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:00.518 00:10:00.518 real 0m0.639s 00:10:00.518 user 0m0.398s 00:10:00.518 sys 0m0.136s 00:10:00.518 16:04:19 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:00.518 16:04:19 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:00.518 ************************************ 00:10:00.518 END TEST bdev_json_nonenclosed 00:10:00.518 ************************************ 00:10:00.518 16:04:19 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:00.518 16:04:19 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:10:00.518 16:04:19 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:00.518 16:04:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:00.518 ************************************ 00:10:00.518 START TEST bdev_json_nonarray 00:10:00.518 ************************************ 00:10:00.518 16:04:19 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:00.776 [2024-11-04 16:04:19.260690] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:10:00.776 [2024-11-04 16:04:19.260822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63480 ] 00:10:00.776 [2024-11-04 16:04:19.442918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.034 [2024-11-04 16:04:19.560160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.034 [2024-11-04 16:04:19.560259] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:10:01.034 [2024-11-04 16:04:19.560282] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:01.034 [2024-11-04 16:04:19.560295] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:01.292 00:10:01.292 real 0m0.646s 00:10:01.292 user 0m0.406s 00:10:01.292 sys 0m0.135s 00:10:01.292 16:04:19 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:01.292 16:04:19 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:01.292 ************************************ 00:10:01.292 END TEST bdev_json_nonarray 00:10:01.292 ************************************ 00:10:01.292 16:04:19 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:10:01.292 16:04:19 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:10:01.292 16:04:19 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:01.292 16:04:19 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:01.292 16:04:19 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:01.292 16:04:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:01.292 ************************************ 00:10:01.292 START TEST bdev_gpt_uuid 00:10:01.292 ************************************ 00:10:01.292 16:04:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:10:01.292 16:04:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:10:01.292 16:04:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:10:01.292 16:04:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:01.292 16:04:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63511 00:10:01.292 16:04:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:01.293 16:04:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63511 00:10:01.293 16:04:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 63511 ']' 00:10:01.293 16:04:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.293 16:04:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:01.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.293 16:04:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.293 16:04:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:01.293 16:04:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:01.551 [2024-11-04 16:04:20.024217] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:10:01.551 [2024-11-04 16:04:20.024399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63511 ] 00:10:01.551 [2024-11-04 16:04:20.223590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.810 [2024-11-04 16:04:20.339611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.747 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:02.747 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:10:02.747 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:02.747 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.747 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:03.008 Some configs were skipped because the RPC state that can call them passed over. 00:10:03.008 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.008 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:10:03.008 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.008 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:03.008 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.008 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:03.008 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.008 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:03.008 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.008 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:10:03.008 { 00:10:03.008 "name": "Nvme1n1p1", 00:10:03.008 "aliases": [ 00:10:03.008 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:03.008 ], 00:10:03.008 "product_name": "GPT Disk", 00:10:03.008 "block_size": 4096, 00:10:03.008 "num_blocks": 655104, 00:10:03.008 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:03.008 "assigned_rate_limits": { 00:10:03.008 "rw_ios_per_sec": 0, 00:10:03.008 "rw_mbytes_per_sec": 0, 00:10:03.008 "r_mbytes_per_sec": 0, 00:10:03.008 "w_mbytes_per_sec": 0 00:10:03.008 }, 00:10:03.008 "claimed": false, 00:10:03.008 "zoned": false, 00:10:03.008 "supported_io_types": { 00:10:03.008 "read": true, 00:10:03.008 "write": true, 00:10:03.008 "unmap": true, 00:10:03.008 "flush": true, 00:10:03.008 "reset": true, 00:10:03.008 "nvme_admin": false, 00:10:03.008 "nvme_io": false, 00:10:03.008 "nvme_io_md": false, 00:10:03.008 "write_zeroes": true, 00:10:03.008 "zcopy": false, 00:10:03.008 "get_zone_info": false, 00:10:03.008 "zone_management": false, 00:10:03.008 "zone_append": false, 00:10:03.008 "compare": true, 00:10:03.008 "compare_and_write": false, 00:10:03.008 "abort": true, 00:10:03.008 "seek_hole": false, 00:10:03.008 "seek_data": false, 00:10:03.008 "copy": true, 00:10:03.008 "nvme_iov_md": false 00:10:03.008 }, 00:10:03.008 "driver_specific": { 
00:10:03.008 "gpt": { 00:10:03.008 "base_bdev": "Nvme1n1", 00:10:03.008 "offset_blocks": 256, 00:10:03.008 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:03.008 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:03.008 "partition_name": "SPDK_TEST_first" 00:10:03.008 } 00:10:03.008 } 00:10:03.008 } 00:10:03.008 ]' 00:10:03.008 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:10:03.008 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:10:03.008 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:10:03.008 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:03.008 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:03.008 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:10:03.272 { 00:10:03.272 "name": "Nvme1n1p2", 00:10:03.272 "aliases": [ 00:10:03.272 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:03.272 ], 00:10:03.272 "product_name": "GPT Disk", 00:10:03.272 "block_size": 4096, 00:10:03.272 "num_blocks": 655103, 00:10:03.272 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:03.272 "assigned_rate_limits": { 00:10:03.272 "rw_ios_per_sec": 0, 00:10:03.272 "rw_mbytes_per_sec": 0, 00:10:03.272 "r_mbytes_per_sec": 0, 00:10:03.272 "w_mbytes_per_sec": 0 00:10:03.272 }, 00:10:03.272 "claimed": false, 00:10:03.272 "zoned": false, 00:10:03.272 "supported_io_types": { 00:10:03.272 "read": true, 00:10:03.272 "write": true, 00:10:03.272 "unmap": true, 00:10:03.272 "flush": true, 00:10:03.272 "reset": true, 00:10:03.272 "nvme_admin": false, 00:10:03.272 "nvme_io": false, 00:10:03.272 "nvme_io_md": false, 00:10:03.272 "write_zeroes": true, 00:10:03.272 "zcopy": false, 00:10:03.272 "get_zone_info": false, 00:10:03.272 "zone_management": false, 00:10:03.272 "zone_append": false, 00:10:03.272 "compare": true, 00:10:03.272 "compare_and_write": false, 00:10:03.272 "abort": true, 00:10:03.272 "seek_hole": false, 00:10:03.272 "seek_data": false, 00:10:03.272 "copy": true, 00:10:03.272 "nvme_iov_md": false 00:10:03.272 }, 00:10:03.272 "driver_specific": { 00:10:03.272 "gpt": { 00:10:03.272 "base_bdev": "Nvme1n1", 00:10:03.272 "offset_blocks": 655360, 00:10:03.272 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:03.272 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:03.272 "partition_name": "SPDK_TEST_second" 00:10:03.272 } 00:10:03.272 } 00:10:03.272 } 00:10:03.272 ]' 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63511 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 63511 ']' 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 63511 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63511 00:10:03.272 killing process with pid 63511 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63511' 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 63511 00:10:03.272 16:04:21 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 63511 00:10:05.835 00:10:05.835 real 0m4.404s 00:10:05.835 user 0m4.485s 00:10:05.835 sys 0m0.575s 00:10:05.835 16:04:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:05.835 16:04:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:05.835 ************************************ 00:10:05.835 END TEST bdev_gpt_uuid 00:10:05.835 ************************************ 00:10:05.835 16:04:24 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:10:05.835 16:04:24 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:10:05.835 16:04:24 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:10:05.835 16:04:24 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:05.835 16:04:24 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:05.835 16:04:24 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:10:05.835 16:04:24 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:10:05.835 16:04:24 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:10:05.835 16:04:24 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:06.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:06.663 Waiting for block devices as requested 00:10:06.663 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:06.663 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:10:06.923 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:06.923 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:12.195 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:12.195 16:04:30 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:10:12.195 16:04:30 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:10:12.454 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:12.454 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:10:12.454 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:12.454 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:10:12.454 16:04:30 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:10:12.454 ************************************ 00:10:12.454 END TEST blockdev_nvme_gpt 00:10:12.454 ************************************ 00:10:12.454 00:10:12.454 real 1m6.705s 00:10:12.454 user 1m23.232s 00:10:12.454 sys 0m12.604s 00:10:12.454 16:04:30 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:12.454 16:04:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:12.454 16:04:31 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:12.454 16:04:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:12.454 16:04:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:12.454 16:04:31 -- common/autotest_common.sh@10 -- # set +x 00:10:12.454 ************************************ 00:10:12.454 START TEST nvme 00:10:12.454 ************************************ 00:10:12.454 16:04:31 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:12.454 * Looking for test storage... 00:10:12.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:12.454 16:04:31 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:12.713 16:04:31 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:10:12.714 16:04:31 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:12.714 16:04:31 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:12.714 16:04:31 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.714 16:04:31 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.714 16:04:31 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.714 16:04:31 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.714 16:04:31 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.714 16:04:31 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.714 16:04:31 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.714 16:04:31 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.714 16:04:31 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.714 16:04:31 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.714 16:04:31 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.714 16:04:31 nvme -- scripts/common.sh@344 -- # case "$op" in 00:10:12.714 16:04:31 nvme -- scripts/common.sh@345 -- # : 1 00:10:12.714 16:04:31 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.714 16:04:31 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:12.714 16:04:31 nvme -- scripts/common.sh@365 -- # decimal 1 00:10:12.714 16:04:31 nvme -- scripts/common.sh@353 -- # local d=1 00:10:12.714 16:04:31 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.714 16:04:31 nvme -- scripts/common.sh@355 -- # echo 1 00:10:12.714 16:04:31 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.714 16:04:31 nvme -- scripts/common.sh@366 -- # decimal 2 00:10:12.714 16:04:31 nvme -- scripts/common.sh@353 -- # local d=2 00:10:12.714 16:04:31 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.714 16:04:31 nvme -- scripts/common.sh@355 -- # echo 2 00:10:12.714 16:04:31 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.714 16:04:31 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.714 16:04:31 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.714 16:04:31 nvme -- scripts/common.sh@368 -- # return 0 00:10:12.714 16:04:31 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.714 16:04:31 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:12.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.714 --rc genhtml_branch_coverage=1 00:10:12.714 --rc genhtml_function_coverage=1 00:10:12.714 --rc genhtml_legend=1 00:10:12.714 --rc geninfo_all_blocks=1 00:10:12.714 --rc geninfo_unexecuted_blocks=1 00:10:12.714 00:10:12.714 ' 00:10:12.714 16:04:31 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:12.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.714 --rc genhtml_branch_coverage=1 00:10:12.714 --rc genhtml_function_coverage=1 00:10:12.714 --rc genhtml_legend=1 00:10:12.714 --rc geninfo_all_blocks=1 00:10:12.714 --rc geninfo_unexecuted_blocks=1 00:10:12.714 00:10:12.714 ' 00:10:12.714 16:04:31 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:12.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.714 --rc genhtml_branch_coverage=1 00:10:12.714 --rc genhtml_function_coverage=1 00:10:12.714 --rc genhtml_legend=1 00:10:12.714 --rc geninfo_all_blocks=1 00:10:12.714 --rc geninfo_unexecuted_blocks=1 00:10:12.714 00:10:12.714 ' 00:10:12.714 16:04:31 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:12.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.714 --rc genhtml_branch_coverage=1 00:10:12.714 --rc genhtml_function_coverage=1 00:10:12.714 --rc genhtml_legend=1 00:10:12.714 --rc geninfo_all_blocks=1 00:10:12.714 --rc geninfo_unexecuted_blocks=1 00:10:12.714 00:10:12.714 ' 00:10:12.714 16:04:31 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:13.281 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:14.218 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:14.218 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:14.218 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:14.219 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:14.219 16:04:32 nvme -- nvme/nvme.sh@79 -- # uname 00:10:14.219 16:04:32 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:14.219 16:04:32 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:14.219 16:04:32 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:14.219 16:04:32 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:14.219 16:04:32 nvme -- 
common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:10:14.219 16:04:32 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:10:14.219 16:04:32 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:14.219 16:04:32 nvme -- common/autotest_common.sh@1073 -- # stubpid=64178 00:10:14.219 Waiting for stub to ready for secondary processes... 00:10:14.219 16:04:32 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to ready for secondary processes... 00:10:14.219 16:04:32 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:14.219 16:04:32 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/64178 ]] 00:10:14.219 16:04:32 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:10:14.219 [2024-11-04 16:04:32.922255] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:10:14.219 [2024-11-04 16:04:32.922487] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:10:15.164 16:04:33 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:15.164 16:04:33 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/64178 ]] 00:10:15.164 16:04:33 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:10:15.437 [2024-11-04 16:04:33.937396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.437 [2024-11-04 16:04:34.047878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.437 [2024-11-04 16:04:34.048012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.437 [2024-11-04 16:04:34.048045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.437 [2024-11-04 16:04:34.066726] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:10:15.437 [2024-11-04 16:04:34.066918] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:15.437 [2024-11-04 16:04:34.083031] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:15.437 [2024-11-04 16:04:34.083632] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:15.437 [2024-11-04 16:04:34.088439] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:15.437 [2024-11-04 16:04:34.088976] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:15.437 [2024-11-04 16:04:34.089306] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:15.437 [2024-11-04 16:04:34.093321] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:15.437 [2024-11-04 16:04:34.093773] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:15.437 [2024-11-04 16:04:34.094060] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:15.437 [2024-11-04 16:04:34.097671] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:15.437 [2024-11-04 16:04:34.098054] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:15.437 [2024-11-04 16:04:34.098320] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:15.437 [2024-11-04 16:04:34.098536] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:15.437 [2024-11-04 16:04:34.098635] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:16.374 16:04:34 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:16.374 16:04:34 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:10:16.374 done. 00:10:16.374 16:04:34 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:16.374 16:04:34 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:10:16.374 16:04:34 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:16.374 16:04:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:16.374 ************************************ 00:10:16.374 START TEST nvme_reset 00:10:16.374 ************************************ 00:10:16.374 16:04:34 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:16.633 Initializing NVMe Controllers 00:10:16.633 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:16.633 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:16.633 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:16.633 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:16.633 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:16.633 00:10:16.633 real 0m0.300s 00:10:16.633 user 0m0.114s 00:10:16.633 ************************************ 00:10:16.633 END TEST nvme_reset 00:10:16.633 ************************************ 00:10:16.633 sys 0m0.146s 00:10:16.633 16:04:35 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:16.633 16:04:35 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:10:16.633 16:04:35 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:16.633 16:04:35 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:16.633 16:04:35 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:16.633 16:04:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:16.633 ************************************ 00:10:16.633 START TEST nvme_identify 00:10:16.633 ************************************ 00:10:16.633 16:04:35 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:10:16.633 16:04:35 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:10:16.633 16:04:35 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:16.633 16:04:35 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:16.633 16:04:35 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:16.633 16:04:35 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:16.633 16:04:35 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:10:16.633 16:04:35 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:16.633 16:04:35 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:16.633 16:04:35 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:16.892 16:04:35 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:16.892 16:04:35 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:16.892 16:04:35 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:17.155 [2024-11-04 16:04:35.637638] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64212 terminated unexpected 00:10:17.155 ===================================================== 00:10:17.155 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:17.155 ===================================================== 00:10:17.155 Controller Capabilities/Features 00:10:17.155 ================================ 00:10:17.155 Vendor ID: 1b36 00:10:17.155 Subsystem Vendor ID: 1af4 00:10:17.155 Serial Number: 12340 00:10:17.155 Model Number: QEMU NVMe Ctrl 00:10:17.155 Firmware Version: 8.0.0 00:10:17.155 Recommended Arb Burst: 6 00:10:17.155 IEEE OUI Identifier: 00 54 52 00:10:17.155 Multi-path I/O 00:10:17.155 May have multiple subsystem ports: No 00:10:17.155 May have multiple controllers: No 00:10:17.155 Associated with SR-IOV VF: No 00:10:17.155 Max Data Transfer Size: 524288 00:10:17.155 Max Number of Namespaces: 256 00:10:17.155 Max Number of I/O Queues: 64 00:10:17.155 NVMe Specification Version (VS): 1.4 00:10:17.155 NVMe Specification Version (Identify): 1.4 00:10:17.155 Maximum Queue Entries: 2048 00:10:17.155 Contiguous Queues Required: Yes 00:10:17.155 Arbitration Mechanisms Supported 00:10:17.155 Weighted Round Robin: Not Supported 00:10:17.155 Vendor Specific: Not Supported 00:10:17.155 Reset Timeout: 7500 ms 00:10:17.155 Doorbell Stride: 4 bytes 00:10:17.155 NVM Subsystem Reset: Not Supported 00:10:17.155 Command Sets Supported 00:10:17.155 NVM Command Set: Supported 00:10:17.155 Boot Partition: Not Supported 00:10:17.155 Memory Page Size Minimum: 4096 bytes 00:10:17.155 Memory Page Size Maximum: 65536 bytes 00:10:17.155 Persistent Memory Region: Not Supported 00:10:17.155 Optional Asynchronous Events Supported 00:10:17.155 Namespace Attribute Notices: Supported 00:10:17.155 Firmware Activation Notices: Not Supported 00:10:17.155 ANA Change Notices: Not Supported 00:10:17.155 PLE Aggregate Log Change Notices: Not Supported 00:10:17.155 LBA Status Info Alert Notices: Not Supported 00:10:17.155 EGE Aggregate Log Change Notices: Not Supported 00:10:17.155 Normal NVM Subsystem Shutdown event: Not Supported 00:10:17.155 Zone Descriptor Change Notices: Not Supported 00:10:17.155 Discovery Log Change Notices: Not Supported 00:10:17.155 Controller Attributes 00:10:17.155 128-bit Host Identifier: Not Supported 00:10:17.155 Non-Operational Permissive Mode: Not Supported 00:10:17.155 NVM Sets: Not Supported 00:10:17.155 Read Recovery Levels: Not Supported 00:10:17.155 Endurance Groups: Not Supported 00:10:17.155 Predictable Latency Mode: Not Supported 00:10:17.155 Traffic Based Keep ALive: Not Supported 00:10:17.155 Namespace Granularity: Not Supported 00:10:17.155 SQ Associations: Not Supported 00:10:17.155 UUID List: Not Supported 00:10:17.155 Multi-Domain Subsystem: Not Supported 00:10:17.155 Fixed Capacity Management: Not Supported 00:10:17.155 Variable Capacity Management: Not Supported 00:10:17.155 Delete Endurance Group: Not Supported 00:10:17.155 Delete NVM Set: Not Supported 00:10:17.155 Extended LBA Formats Supported: Supported 00:10:17.155 Flexible Data Placement Supported: Not Supported 00:10:17.155 00:10:17.155 Controller Memory Buffer Support 00:10:17.155 ================================ 00:10:17.155 Supported: No 
00:10:17.155 00:10:17.155 Persistent Memory Region Support 00:10:17.155 ================================ 00:10:17.155 Supported: No 00:10:17.155 00:10:17.155 Admin Command Set Attributes 00:10:17.155 ============================ 00:10:17.155 Security Send/Receive: Not Supported 00:10:17.155 Format NVM: Supported 00:10:17.155 Firmware Activate/Download: Not Supported 00:10:17.155 Namespace Management: Supported 00:10:17.155 Device Self-Test: Not Supported 00:10:17.155 Directives: Supported 00:10:17.155 NVMe-MI: Not Supported 00:10:17.155 Virtualization Management: Not Supported 00:10:17.155 Doorbell Buffer Config: Supported 00:10:17.155 Get LBA Status Capability: Not Supported 00:10:17.155 Command & Feature Lockdown Capability: Not Supported 00:10:17.155 Abort Command Limit: 4 00:10:17.155 Async Event Request Limit: 4 00:10:17.155 Number of Firmware Slots: N/A 00:10:17.155 Firmware Slot 1 Read-Only: N/A 00:10:17.155 Firmware Activation Without Reset: N/A 00:10:17.155 Multiple Update Detection Support: N/A 00:10:17.155 Firmware Update Granularity: No Information Provided 00:10:17.155 Per-Namespace SMART Log: Yes 00:10:17.155 Asymmetric Namespace Access Log Page: Not Supported 00:10:17.155 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:17.155 Command Effects Log Page: Supported 00:10:17.155 Get Log Page Extended Data: Supported 00:10:17.155 Telemetry Log Pages: Not Supported 00:10:17.155 Persistent Event Log Pages: Not Supported 00:10:17.155 Supported Log Pages Log Page: May Support 00:10:17.155 Commands Supported & Effects Log Page: Not Supported 00:10:17.155 Feature Identifiers & Effects Log Page:May Support 00:10:17.155 NVMe-MI Commands & Effects Log Page: May Support 00:10:17.155 Data Area 4 for Telemetry Log: Not Supported 00:10:17.155 Error Log Page Entries Supported: 1 00:10:17.155 Keep Alive: Not Supported 00:10:17.155 00:10:17.155 NVM Command Set Attributes 00:10:17.155 ========================== 00:10:17.155 Submission Queue Entry Size 00:10:17.155 Max: 64 00:10:17.155 Min: 64 00:10:17.155 Completion Queue Entry Size 00:10:17.155 Max: 16 00:10:17.155 Min: 16 00:10:17.155 Number of Namespaces: 256 00:10:17.155 Compare Command: Supported 00:10:17.155 Write Uncorrectable Command: Not Supported 00:10:17.155 Dataset Management Command: Supported 00:10:17.155 Write Zeroes Command: Supported 00:10:17.155 Set Features Save Field: Supported 00:10:17.155 Reservations: Not Supported 00:10:17.155 Timestamp: Supported 00:10:17.155 Copy: Supported 00:10:17.155 Volatile Write Cache: Present 00:10:17.155 Atomic Write Unit (Normal): 1 00:10:17.155 Atomic Write Unit (PFail): 1 00:10:17.155 Atomic Compare & Write Unit: 1 00:10:17.155 Fused Compare & Write: Not Supported 00:10:17.155 Scatter-Gather List 00:10:17.155 SGL Command Set: Supported 00:10:17.155 SGL Keyed: Not Supported 00:10:17.155 SGL Bit Bucket Descriptor: Not Supported 00:10:17.155 SGL Metadata Pointer: Not Supported 00:10:17.155 Oversized SGL: Not Supported 00:10:17.155 SGL Metadata Address: Not Supported 00:10:17.155 SGL Offset: Not Supported 00:10:17.155 Transport SGL Data Block: Not Supported 00:10:17.155 Replay Protected Memory Block: Not Supported 00:10:17.155 00:10:17.155 Firmware Slot Information 00:10:17.155 ========================= 00:10:17.155 Active slot: 1 00:10:17.155 Slot 1 Firmware Revision: 1.0 00:10:17.155 00:10:17.155 00:10:17.155 Commands Supported and Effects 00:10:17.155 ============================== 00:10:17.155 Admin Commands 00:10:17.155 -------------- 00:10:17.155 Delete I/O Submission Queue (00h): Supported 
00:10:17.155 Create I/O Submission Queue (01h): Supported 00:10:17.155 Get Log Page (02h): Supported 00:10:17.155 Delete I/O Completion Queue (04h): Supported 00:10:17.155 Create I/O Completion Queue (05h): Supported 00:10:17.155 Identify (06h): Supported 00:10:17.155 Abort (08h): Supported 00:10:17.155 Set Features (09h): Supported 00:10:17.155 Get Features (0Ah): Supported 00:10:17.155 Asynchronous Event Request (0Ch): Supported 00:10:17.155 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:17.155 Directive Send (19h): Supported 00:10:17.155 Directive Receive (1Ah): Supported 00:10:17.155 Virtualization Management (1Ch): Supported 00:10:17.155 Doorbell Buffer Config (7Ch): Supported 00:10:17.155 Format NVM (80h): Supported LBA-Change 00:10:17.155 I/O Commands 00:10:17.155 ------------ 00:10:17.155 Flush (00h): Supported LBA-Change 00:10:17.155 Write (01h): Supported LBA-Change 00:10:17.155 Read (02h): Supported 00:10:17.155 Compare (05h): Supported 00:10:17.155 Write Zeroes (08h): Supported LBA-Change 00:10:17.155 Dataset Management (09h): Supported LBA-Change 00:10:17.155 Unknown (0Ch): Supported 00:10:17.155 Unknown (12h): Supported 00:10:17.155 Copy (19h): Supported LBA-Change 00:10:17.156 Unknown (1Dh): Supported LBA-Change 00:10:17.156 00:10:17.156 Error Log 00:10:17.156 ========= 00:10:17.156 00:10:17.156 Arbitration 00:10:17.156 =========== 00:10:17.156 Arbitration Burst: no limit 00:10:17.156 00:10:17.156 Power Management 00:10:17.156 ================ 00:10:17.156 Number of Power States: 1 00:10:17.156 Current Power State: Power State #0 00:10:17.156 Power State #0: 00:10:17.156 Max Power: 25.00 W 00:10:17.156 Non-Operational State: Operational 00:10:17.156 Entry Latency: 16 microseconds 00:10:17.156 Exit Latency: 4 microseconds 00:10:17.156 Relative Read Throughput: 0 00:10:17.156 Relative Read Latency: 0 00:10:17.156 Relative Write Throughput: 0 00:10:17.156 Relative Write Latency: 0 00:10:17.156 Idle Power[2024-11-04 16:04:35.638710] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64212 terminated unexpected 00:10:17.156 : Not Reported 00:10:17.156 Active Power: Not Reported 00:10:17.156 Non-Operational Permissive Mode: Not Supported 00:10:17.156 00:10:17.156 Health Information 00:10:17.156 ================== 00:10:17.156 Critical Warnings: 00:10:17.156 Available Spare Space: OK 00:10:17.156 Temperature: OK 00:10:17.156 Device Reliability: OK 00:10:17.156 Read Only: No 00:10:17.156 Volatile Memory Backup: OK 00:10:17.156 Current Temperature: 323 Kelvin (50 Celsius) 00:10:17.156 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:17.156 Available Spare: 0% 00:10:17.156 Available Spare Threshold: 0% 00:10:17.156 Life Percentage Used: 0% 00:10:17.156 Data Units Read: 720 00:10:17.156 Data Units Written: 648 00:10:17.156 Host Read Commands: 31384 00:10:17.156 Host Write Commands: 31170 00:10:17.156 Controller Busy Time: 0 minutes 00:10:17.156 Power Cycles: 0 00:10:17.156 Power On Hours: 0 hours 00:10:17.156 Unsafe Shutdowns: 0 00:10:17.156 Unrecoverable Media Errors: 0 00:10:17.156 Lifetime Error Log Entries: 0 00:10:17.156 Warning Temperature Time: 0 minutes 00:10:17.156 Critical Temperature Time: 0 minutes 00:10:17.156 00:10:17.156 Number of Queues 00:10:17.156 ================ 00:10:17.156 Number of I/O Submission Queues: 64 00:10:17.156 Number of I/O Completion Queues: 64 00:10:17.156 00:10:17.156 ZNS Specific Controller Data 00:10:17.156 ============================ 00:10:17.156 Zone Append Size Limit: 0 00:10:17.156 
00:10:17.156 00:10:17.156 Active Namespaces 00:10:17.156 ================= 00:10:17.156 Namespace ID:1 00:10:17.156 Error Recovery Timeout: Unlimited 00:10:17.156 Command Set Identifier: NVM (00h) 00:10:17.156 Deallocate: Supported 00:10:17.156 Deallocated/Unwritten Error: Supported 00:10:17.156 Deallocated Read Value: All 0x00 00:10:17.156 Deallocate in Write Zeroes: Not Supported 00:10:17.156 Deallocated Guard Field: 0xFFFF 00:10:17.156 Flush: Supported 00:10:17.156 Reservation: Not Supported 00:10:17.156 Metadata Transferred as: Separate Metadata Buffer 00:10:17.156 Namespace Sharing Capabilities: Private 00:10:17.156 Size (in LBAs): 1548666 (5GiB) 00:10:17.156 Capacity (in LBAs): 1548666 (5GiB) 00:10:17.156 Utilization (in LBAs): 1548666 (5GiB) 00:10:17.156 Thin Provisioning: Not Supported 00:10:17.156 Per-NS Atomic Units: No 00:10:17.156 Maximum Single Source Range Length: 128 00:10:17.156 Maximum Copy Length: 128 00:10:17.156 Maximum Source Range Count: 128 00:10:17.156 NGUID/EUI64 Never Reused: No 00:10:17.156 Namespace Write Protected: No 00:10:17.156 Number of LBA Formats: 8 00:10:17.156 Current LBA Format: LBA Format #07 00:10:17.156 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:17.156 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:17.156 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:17.156 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:17.156 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:17.156 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:17.156 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:17.156 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:17.156 00:10:17.156 NVM Specific Namespace Data 00:10:17.156 =========================== 00:10:17.156 Logical Block Storage Tag Mask: 0 00:10:17.156 Protection Information Capabilities: 00:10:17.156 16b Guard Protection Information Storage Tag Support: No 00:10:17.156 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:17.156 Storage Tag Check Read Support: No 00:10:17.156 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.156 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.156 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.156 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.156 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.156 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.156 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.156 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.156 ===================================================== 00:10:17.156 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:17.156 ===================================================== 00:10:17.156 Controller Capabilities/Features 00:10:17.156 ================================ 00:10:17.156 Vendor ID: 1b36 00:10:17.156 Subsystem Vendor ID: 1af4 00:10:17.156 Serial Number: 12341 00:10:17.156 Model Number: QEMU NVMe Ctrl 00:10:17.156 Firmware Version: 8.0.0 00:10:17.156 Recommended Arb Burst: 6 00:10:17.156 IEEE OUI Identifier: 00 54 52 00:10:17.156 Multi-path I/O 00:10:17.156 May have multiple subsystem ports: No 00:10:17.156 May have multiple controllers: No 
00:10:17.156 Associated with SR-IOV VF: No 00:10:17.156 Max Data Transfer Size: 524288 00:10:17.156 Max Number of Namespaces: 256 00:10:17.156 Max Number of I/O Queues: 64 00:10:17.156 NVMe Specification Version (VS): 1.4 00:10:17.156 NVMe Specification Version (Identify): 1.4 00:10:17.156 Maximum Queue Entries: 2048 00:10:17.156 Contiguous Queues Required: Yes 00:10:17.156 Arbitration Mechanisms Supported 00:10:17.156 Weighted Round Robin: Not Supported 00:10:17.156 Vendor Specific: Not Supported 00:10:17.156 Reset Timeout: 7500 ms 00:10:17.156 Doorbell Stride: 4 bytes 00:10:17.156 NVM Subsystem Reset: Not Supported 00:10:17.156 Command Sets Supported 00:10:17.156 NVM Command Set: Supported 00:10:17.156 Boot Partition: Not Supported 00:10:17.156 Memory Page Size Minimum: 4096 bytes 00:10:17.156 Memory Page Size Maximum: 65536 bytes 00:10:17.156 Persistent Memory Region: Not Supported 00:10:17.156 Optional Asynchronous Events Supported 00:10:17.156 Namespace Attribute Notices: Supported 00:10:17.156 Firmware Activation Notices: Not Supported 00:10:17.156 ANA Change Notices: Not Supported 00:10:17.156 PLE Aggregate Log Change Notices: Not Supported 00:10:17.156 LBA Status Info Alert Notices: Not Supported 00:10:17.156 EGE Aggregate Log Change Notices: Not Supported 00:10:17.156 Normal NVM Subsystem Shutdown event: Not Supported 00:10:17.156 Zone Descriptor Change Notices: Not Supported 00:10:17.156 Discovery Log Change Notices: Not Supported 00:10:17.156 Controller Attributes 00:10:17.156 128-bit Host Identifier: Not Supported 00:10:17.156 Non-Operational Permissive Mode: Not Supported 00:10:17.156 NVM Sets: Not Supported 00:10:17.156 Read Recovery Levels: Not Supported 00:10:17.156 Endurance Groups: Not Supported 00:10:17.156 Predictable Latency Mode: Not Supported 00:10:17.156 Traffic Based Keep ALive: Not Supported 00:10:17.156 Namespace Granularity: Not Supported 00:10:17.156 SQ Associations: Not Supported 00:10:17.156 UUID List: Not Supported 00:10:17.156 Multi-Domain Subsystem: Not Supported 00:10:17.156 Fixed Capacity Management: Not Supported 00:10:17.156 Variable Capacity Management: Not Supported 00:10:17.156 Delete Endurance Group: Not Supported 00:10:17.156 Delete NVM Set: Not Supported 00:10:17.156 Extended LBA Formats Supported: Supported 00:10:17.156 Flexible Data Placement Supported: Not Supported 00:10:17.156 00:10:17.156 Controller Memory Buffer Support 00:10:17.156 ================================ 00:10:17.156 Supported: No 00:10:17.156 00:10:17.156 Persistent Memory Region Support 00:10:17.156 ================================ 00:10:17.156 Supported: No 00:10:17.156 00:10:17.156 Admin Command Set Attributes 00:10:17.156 ============================ 00:10:17.156 Security Send/Receive: Not Supported 00:10:17.156 Format NVM: Supported 00:10:17.156 Firmware Activate/Download: Not Supported 00:10:17.156 Namespace Management: Supported 00:10:17.156 Device Self-Test: Not Supported 00:10:17.156 Directives: Supported 00:10:17.156 NVMe-MI: Not Supported 00:10:17.156 Virtualization Management: Not Supported 00:10:17.157 Doorbell Buffer Config: Supported 00:10:17.157 Get LBA Status Capability: Not Supported 00:10:17.157 Command & Feature Lockdown Capability: Not Supported 00:10:17.157 Abort Command Limit: 4 00:10:17.157 Async Event Request Limit: 4 00:10:17.157 Number of Firmware Slots: N/A 00:10:17.157 Firmware Slot 1 Read-Only: N/A 00:10:17.157 Firmware Activation Without Reset: N/A 00:10:17.157 Multiple Update Detection Support: N/A 00:10:17.157 Firmware Update Granularity: No 
Information Provided 00:10:17.157 Per-Namespace SMART Log: Yes 00:10:17.157 Asymmetric Namespace Access Log Page: Not Supported 00:10:17.157 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:17.157 Command Effects Log Page: Supported 00:10:17.157 Get Log Page Extended Data: Supported 00:10:17.157 Telemetry Log Pages: Not Supported 00:10:17.157 Persistent Event Log Pages: Not Supported 00:10:17.157 Supported Log Pages Log Page: May Support 00:10:17.157 Commands Supported & Effects Log Page: Not Supported 00:10:17.157 Feature Identifiers & Effects Log Page:May Support 00:10:17.157 NVMe-MI Commands & Effects Log Page: May Support 00:10:17.157 Data Area 4 for Telemetry Log: Not Supported 00:10:17.157 Error Log Page Entries Supported: 1 00:10:17.157 Keep Alive: Not Supported 00:10:17.157 00:10:17.157 NVM Command Set Attributes 00:10:17.157 ========================== 00:10:17.157 Submission Queue Entry Size 00:10:17.157 Max: 64 00:10:17.157 Min: 64 00:10:17.157 Completion Queue Entry Size 00:10:17.157 Max: 16 00:10:17.157 Min: 16 00:10:17.157 Number of Namespaces: 256 00:10:17.157 Compare Command: Supported 00:10:17.157 Write Uncorrectable Command: Not Supported 00:10:17.157 Dataset Management Command: Supported 00:10:17.157 Write Zeroes Command: Supported 00:10:17.157 Set Features Save Field: Supported 00:10:17.157 Reservations: Not Supported 00:10:17.157 Timestamp: Supported 00:10:17.157 Copy: Supported 00:10:17.157 Volatile Write Cache: Present 00:10:17.157 Atomic Write Unit (Normal): 1 00:10:17.157 Atomic Write Unit (PFail): 1 00:10:17.157 Atomic Compare & Write Unit: 1 00:10:17.157 Fused Compare & Write: Not Supported 00:10:17.157 Scatter-Gather List 00:10:17.157 SGL Command Set: Supported 00:10:17.157 SGL Keyed: Not Supported 00:10:17.157 SGL Bit Bucket Descriptor: Not Supported 00:10:17.157 SGL Metadata Pointer: Not Supported 00:10:17.157 Oversized SGL: Not Supported 00:10:17.157 SGL Metadata Address: Not Supported 00:10:17.157 SGL Offset: Not Supported 00:10:17.157 Transport SGL Data Block: Not Supported 00:10:17.157 Replay Protected Memory Block: Not Supported 00:10:17.157 00:10:17.157 Firmware Slot Information 00:10:17.157 ========================= 00:10:17.157 Active slot: 1 00:10:17.157 Slot 1 Firmware Revision: 1.0 00:10:17.157 00:10:17.157 00:10:17.157 Commands Supported and Effects 00:10:17.157 ============================== 00:10:17.157 Admin Commands 00:10:17.157 -------------- 00:10:17.157 Delete I/O Submission Queue (00h): Supported 00:10:17.157 Create I/O Submission Queue (01h): Supported 00:10:17.157 Get Log Page (02h): Supported 00:10:17.157 Delete I/O Completion Queue (04h): Supported 00:10:17.157 Create I/O Completion Queue (05h): Supported 00:10:17.157 Identify (06h): Supported 00:10:17.157 Abort (08h): Supported 00:10:17.157 Set Features (09h): Supported 00:10:17.157 Get Features (0Ah): Supported 00:10:17.157 Asynchronous Event Request (0Ch): Supported 00:10:17.157 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:17.157 Directive Send (19h): Supported 00:10:17.157 Directive Receive (1Ah): Supported 00:10:17.157 Virtualization Management (1Ch): Supported 00:10:17.157 Doorbell Buffer Config (7Ch): Supported 00:10:17.157 Format NVM (80h): Supported LBA-Change 00:10:17.157 I/O Commands 00:10:17.157 ------------ 00:10:17.157 Flush (00h): Supported LBA-Change 00:10:17.157 Write (01h): Supported LBA-Change 00:10:17.157 Read (02h): Supported 00:10:17.157 Compare (05h): Supported 00:10:17.157 Write Zeroes (08h): Supported LBA-Change 00:10:17.157 Dataset Management 
(09h): Supported LBA-Change 00:10:17.157 Unknown (0Ch): Supported 00:10:17.157 Unknown (12h): Supported 00:10:17.157 Copy (19h): Supported LBA-Change 00:10:17.157 Unknown (1Dh): Supported LBA-Change 00:10:17.157 00:10:17.157 Error Log 00:10:17.157 ========= 00:10:17.157 00:10:17.157 Arbitration 00:10:17.157 =========== 00:10:17.157 Arbitration Burst: no limit 00:10:17.157 00:10:17.157 Power Management 00:10:17.157 ================ 00:10:17.157 Number of Power States: 1 00:10:17.157 Current Power State: Power State #0 00:10:17.157 Power State #0: 00:10:17.157 Max Power: 25.00 W 00:10:17.157 Non-Operational State: Operational 00:10:17.157 Entry Latency: 16 microseconds 00:10:17.157 Exit Latency: 4 microseconds 00:10:17.157 Relative Read Throughput: 0 00:10:17.157 Relative Read Latency: 0 00:10:17.157 Relative Write Throughput: 0 00:10:17.157 Relative Write Latency: 0 00:10:17.157 Idle Power: Not Reported 00:10:17.157 Active Power: Not Reported 00:10:17.157 Non-Operational Permissive Mode: Not Supported 00:10:17.157 00:10:17.157 Health Information 00:10:17.157 ================== 00:10:17.157 Critical Warnings: 00:10:17.157 Available Spare Space: OK 00:10:17.157 Temperature: [2024-11-04 16:04:35.639383] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64212 terminated unexpected 00:10:17.157 OK 00:10:17.157 Device Reliability: OK 00:10:17.157 Read Only: No 00:10:17.157 Volatile Memory Backup: OK 00:10:17.157 Current Temperature: 323 Kelvin (50 Celsius) 00:10:17.157 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:17.157 Available Spare: 0% 00:10:17.157 Available Spare Threshold: 0% 00:10:17.157 Life Percentage Used: 0% 00:10:17.157 Data Units Read: 1109 00:10:17.157 Data Units Written: 983 00:10:17.157 Host Read Commands: 46645 00:10:17.157 Host Write Commands: 45548 00:10:17.157 Controller Busy Time: 0 minutes 00:10:17.157 Power Cycles: 0 00:10:17.157 Power On Hours: 0 hours 00:10:17.157 Unsafe Shutdowns: 0 00:10:17.157 Unrecoverable Media Errors: 0 00:10:17.157 Lifetime Error Log Entries: 0 00:10:17.157 Warning Temperature Time: 0 minutes 00:10:17.157 Critical Temperature Time: 0 minutes 00:10:17.157 00:10:17.157 Number of Queues 00:10:17.157 ================ 00:10:17.157 Number of I/O Submission Queues: 64 00:10:17.157 Number of I/O Completion Queues: 64 00:10:17.157 00:10:17.157 ZNS Specific Controller Data 00:10:17.157 ============================ 00:10:17.157 Zone Append Size Limit: 0 00:10:17.157 00:10:17.157 00:10:17.157 Active Namespaces 00:10:17.157 ================= 00:10:17.157 Namespace ID:1 00:10:17.157 Error Recovery Timeout: Unlimited 00:10:17.157 Command Set Identifier: NVM (00h) 00:10:17.157 Deallocate: Supported 00:10:17.157 Deallocated/Unwritten Error: Supported 00:10:17.157 Deallocated Read Value: All 0x00 00:10:17.157 Deallocate in Write Zeroes: Not Supported 00:10:17.157 Deallocated Guard Field: 0xFFFF 00:10:17.157 Flush: Supported 00:10:17.157 Reservation: Not Supported 00:10:17.157 Namespace Sharing Capabilities: Private 00:10:17.157 Size (in LBAs): 1310720 (5GiB) 00:10:17.157 Capacity (in LBAs): 1310720 (5GiB) 00:10:17.157 Utilization (in LBAs): 1310720 (5GiB) 00:10:17.157 Thin Provisioning: Not Supported 00:10:17.157 Per-NS Atomic Units: No 00:10:17.157 Maximum Single Source Range Length: 128 00:10:17.157 Maximum Copy Length: 128 00:10:17.157 Maximum Source Range Count: 128 00:10:17.157 NGUID/EUI64 Never Reused: No 00:10:17.157 Namespace Write Protected: No 00:10:17.157 Number of LBA Formats: 8 00:10:17.157 Current LBA Format: 
LBA Format #04 00:10:17.157 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:17.157 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:17.157 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:17.157 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:17.157 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:17.157 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:17.157 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:17.157 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:17.157 00:10:17.157 NVM Specific Namespace Data 00:10:17.157 =========================== 00:10:17.157 Logical Block Storage Tag Mask: 0 00:10:17.157 Protection Information Capabilities: 00:10:17.157 16b Guard Protection Information Storage Tag Support: No 00:10:17.157 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:17.157 Storage Tag Check Read Support: No 00:10:17.158 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.158 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.158 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.158 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.158 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.158 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.158 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.158 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.158 ===================================================== 00:10:17.158 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:17.158 ===================================================== 00:10:17.158 Controller Capabilities/Features 00:10:17.158 ================================ 00:10:17.158 Vendor ID: 1b36 00:10:17.158 Subsystem Vendor ID: 1af4 00:10:17.158 Serial Number: 12343 00:10:17.158 Model Number: QEMU NVMe Ctrl 00:10:17.158 Firmware Version: 8.0.0 00:10:17.158 Recommended Arb Burst: 6 00:10:17.158 IEEE OUI Identifier: 00 54 52 00:10:17.158 Multi-path I/O 00:10:17.158 May have multiple subsystem ports: No 00:10:17.158 May have multiple controllers: Yes 00:10:17.158 Associated with SR-IOV VF: No 00:10:17.158 Max Data Transfer Size: 524288 00:10:17.158 Max Number of Namespaces: 256 00:10:17.158 Max Number of I/O Queues: 64 00:10:17.158 NVMe Specification Version (VS): 1.4 00:10:17.158 NVMe Specification Version (Identify): 1.4 00:10:17.158 Maximum Queue Entries: 2048 00:10:17.158 Contiguous Queues Required: Yes 00:10:17.158 Arbitration Mechanisms Supported 00:10:17.158 Weighted Round Robin: Not Supported 00:10:17.158 Vendor Specific: Not Supported 00:10:17.158 Reset Timeout: 7500 ms 00:10:17.158 Doorbell Stride: 4 bytes 00:10:17.158 NVM Subsystem Reset: Not Supported 00:10:17.158 Command Sets Supported 00:10:17.158 NVM Command Set: Supported 00:10:17.158 Boot Partition: Not Supported 00:10:17.158 Memory Page Size Minimum: 4096 bytes 00:10:17.158 Memory Page Size Maximum: 65536 bytes 00:10:17.158 Persistent Memory Region: Not Supported 00:10:17.158 Optional Asynchronous Events Supported 00:10:17.158 Namespace Attribute Notices: Supported 00:10:17.158 Firmware Activation Notices: Not Supported 00:10:17.158 ANA Change Notices: Not Supported 00:10:17.158 PLE Aggregate Log 
Change Notices: Not Supported 00:10:17.158 LBA Status Info Alert Notices: Not Supported 00:10:17.158 EGE Aggregate Log Change Notices: Not Supported 00:10:17.158 Normal NVM Subsystem Shutdown event: Not Supported 00:10:17.158 Zone Descriptor Change Notices: Not Supported 00:10:17.158 Discovery Log Change Notices: Not Supported 00:10:17.158 Controller Attributes 00:10:17.158 128-bit Host Identifier: Not Supported 00:10:17.158 Non-Operational Permissive Mode: Not Supported 00:10:17.158 NVM Sets: Not Supported 00:10:17.158 Read Recovery Levels: Not Supported 00:10:17.158 Endurance Groups: Supported 00:10:17.158 Predictable Latency Mode: Not Supported 00:10:17.158 Traffic Based Keep ALive: Not Supported 00:10:17.158 Namespace Granularity: Not Supported 00:10:17.158 SQ Associations: Not Supported 00:10:17.158 UUID List: Not Supported 00:10:17.158 Multi-Domain Subsystem: Not Supported 00:10:17.158 Fixed Capacity Management: Not Supported 00:10:17.158 Variable Capacity Management: Not Supported 00:10:17.158 Delete Endurance Group: Not Supported 00:10:17.158 Delete NVM Set: Not Supported 00:10:17.158 Extended LBA Formats Supported: Supported 00:10:17.158 Flexible Data Placement Supported: Supported 00:10:17.158 00:10:17.158 Controller Memory Buffer Support 00:10:17.158 ================================ 00:10:17.158 Supported: No 00:10:17.158 00:10:17.158 Persistent Memory Region Support 00:10:17.158 ================================ 00:10:17.158 Supported: No 00:10:17.158 00:10:17.158 Admin Command Set Attributes 00:10:17.158 ============================ 00:10:17.158 Security Send/Receive: Not Supported 00:10:17.158 Format NVM: Supported 00:10:17.158 Firmware Activate/Download: Not Supported 00:10:17.158 Namespace Management: Supported 00:10:17.158 Device Self-Test: Not Supported 00:10:17.158 Directives: Supported 00:10:17.158 NVMe-MI: Not Supported 00:10:17.158 Virtualization Management: Not Supported 00:10:17.158 Doorbell Buffer Config: Supported 00:10:17.158 Get LBA Status Capability: Not Supported 00:10:17.158 Command & Feature Lockdown Capability: Not Supported 00:10:17.158 Abort Command Limit: 4 00:10:17.158 Async Event Request Limit: 4 00:10:17.158 Number of Firmware Slots: N/A 00:10:17.158 Firmware Slot 1 Read-Only: N/A 00:10:17.158 Firmware Activation Without Reset: N/A 00:10:17.158 Multiple Update Detection Support: N/A 00:10:17.158 Firmware Update Granularity: No Information Provided 00:10:17.158 Per-Namespace SMART Log: Yes 00:10:17.158 Asymmetric Namespace Access Log Page: Not Supported 00:10:17.158 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:17.158 Command Effects Log Page: Supported 00:10:17.158 Get Log Page Extended Data: Supported 00:10:17.158 Telemetry Log Pages: Not Supported 00:10:17.158 Persistent Event Log Pages: Not Supported 00:10:17.158 Supported Log Pages Log Page: May Support 00:10:17.158 Commands Supported & Effects Log Page: Not Supported 00:10:17.158 Feature Identifiers & Effects Log Page:May Support 00:10:17.158 NVMe-MI Commands & Effects Log Page: May Support 00:10:17.158 Data Area 4 for Telemetry Log: Not Supported 00:10:17.158 Error Log Page Entries Supported: 1 00:10:17.158 Keep Alive: Not Supported 00:10:17.158 00:10:17.158 NVM Command Set Attributes 00:10:17.158 ========================== 00:10:17.158 Submission Queue Entry Size 00:10:17.158 Max: 64 00:10:17.158 Min: 64 00:10:17.158 Completion Queue Entry Size 00:10:17.158 Max: 16 00:10:17.158 Min: 16 00:10:17.158 Number of Namespaces: 256 00:10:17.158 Compare Command: Supported 00:10:17.158 Write 
Uncorrectable Command: Not Supported 00:10:17.158 Dataset Management Command: Supported 00:10:17.158 Write Zeroes Command: Supported 00:10:17.158 Set Features Save Field: Supported 00:10:17.158 Reservations: Not Supported 00:10:17.158 Timestamp: Supported 00:10:17.158 Copy: Supported 00:10:17.158 Volatile Write Cache: Present 00:10:17.158 Atomic Write Unit (Normal): 1 00:10:17.158 Atomic Write Unit (PFail): 1 00:10:17.158 Atomic Compare & Write Unit: 1 00:10:17.158 Fused Compare & Write: Not Supported 00:10:17.158 Scatter-Gather List 00:10:17.158 SGL Command Set: Supported 00:10:17.158 SGL Keyed: Not Supported 00:10:17.158 SGL Bit Bucket Descriptor: Not Supported 00:10:17.158 SGL Metadata Pointer: Not Supported 00:10:17.158 Oversized SGL: Not Supported 00:10:17.158 SGL Metadata Address: Not Supported 00:10:17.158 SGL Offset: Not Supported 00:10:17.158 Transport SGL Data Block: Not Supported 00:10:17.158 Replay Protected Memory Block: Not Supported 00:10:17.158 00:10:17.158 Firmware Slot Information 00:10:17.158 ========================= 00:10:17.158 Active slot: 1 00:10:17.158 Slot 1 Firmware Revision: 1.0 00:10:17.158 00:10:17.158 00:10:17.158 Commands Supported and Effects 00:10:17.158 ============================== 00:10:17.158 Admin Commands 00:10:17.158 -------------- 00:10:17.158 Delete I/O Submission Queue (00h): Supported 00:10:17.158 Create I/O Submission Queue (01h): Supported 00:10:17.158 Get Log Page (02h): Supported 00:10:17.158 Delete I/O Completion Queue (04h): Supported 00:10:17.158 Create I/O Completion Queue (05h): Supported 00:10:17.158 Identify (06h): Supported 00:10:17.158 Abort (08h): Supported 00:10:17.158 Set Features (09h): Supported 00:10:17.158 Get Features (0Ah): Supported 00:10:17.158 Asynchronous Event Request (0Ch): Supported 00:10:17.158 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:17.158 Directive Send (19h): Supported 00:10:17.158 Directive Receive (1Ah): Supported 00:10:17.158 Virtualization Management (1Ch): Supported 00:10:17.158 Doorbell Buffer Config (7Ch): Supported 00:10:17.158 Format NVM (80h): Supported LBA-Change 00:10:17.158 I/O Commands 00:10:17.158 ------------ 00:10:17.158 Flush (00h): Supported LBA-Change 00:10:17.158 Write (01h): Supported LBA-Change 00:10:17.158 Read (02h): Supported 00:10:17.158 Compare (05h): Supported 00:10:17.158 Write Zeroes (08h): Supported LBA-Change 00:10:17.158 Dataset Management (09h): Supported LBA-Change 00:10:17.158 Unknown (0Ch): Supported 00:10:17.158 Unknown (12h): Supported 00:10:17.158 Copy (19h): Supported LBA-Change 00:10:17.159 Unknown (1Dh): Supported LBA-Change 00:10:17.159 00:10:17.159 Error Log 00:10:17.159 ========= 00:10:17.159 00:10:17.159 Arbitration 00:10:17.159 =========== 00:10:17.159 Arbitration Burst: no limit 00:10:17.159 00:10:17.159 Power Management 00:10:17.159 ================ 00:10:17.159 Number of Power States: 1 00:10:17.159 Current Power State: Power State #0 00:10:17.159 Power State #0: 00:10:17.159 Max Power: 25.00 W 00:10:17.159 Non-Operational State: Operational 00:10:17.159 Entry Latency: 16 microseconds 00:10:17.159 Exit Latency: 4 microseconds 00:10:17.159 Relative Read Throughput: 0 00:10:17.159 Relative Read Latency: 0 00:10:17.159 Relative Write Throughput: 0 00:10:17.159 Relative Write Latency: 0 00:10:17.159 Idle Power: Not Reported 00:10:17.159 Active Power: Not Reported 00:10:17.159 Non-Operational Permissive Mode: Not Supported 00:10:17.159 00:10:17.159 Health Information 00:10:17.159 ================== 00:10:17.159 Critical Warnings: 00:10:17.159 
Available Spare Space: OK 00:10:17.159 Temperature: OK 00:10:17.159 Device Reliability: OK 00:10:17.159 Read Only: No 00:10:17.159 Volatile Memory Backup: OK 00:10:17.159 Current Temperature: 323 Kelvin (50 Celsius) 00:10:17.159 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:17.159 Available Spare: 0% 00:10:17.159 Available Spare Threshold: 0% 00:10:17.159 Life Percentage Used: 0% 00:10:17.159 Data Units Read: 1058 00:10:17.159 Data Units Written: 987 00:10:17.159 Host Read Commands: 34261 00:10:17.159 Host Write Commands: 33684 00:10:17.159 Controller Busy Time: 0 minutes 00:10:17.159 Power Cycles: 0 00:10:17.159 Power On Hours: 0 hours 00:10:17.159 Unsafe Shutdowns: 0 00:10:17.159 Unrecoverable Media Errors: 0 00:10:17.159 Lifetime Error Log Entries: 0 00:10:17.159 Warning Temperature Time: 0 minutes 00:10:17.159 Critical Temperature Time: 0 minutes 00:10:17.159 00:10:17.159 Number of Queues 00:10:17.159 ================ 00:10:17.159 Number of I/O Submission Queues: 64 00:10:17.159 Number of I/O Completion Queues: 64 00:10:17.159 00:10:17.159 ZNS Specific Controller Data 00:10:17.159 ============================ 00:10:17.159 Zone Append Size Limit: 0 00:10:17.159 00:10:17.159 00:10:17.159 Active Namespaces 00:10:17.159 ================= 00:10:17.159 Namespace ID:1 00:10:17.159 Error Recovery Timeout: Unlimited 00:10:17.159 Command Set Identifier: NVM (00h) 00:10:17.159 Deallocate: Supported 00:10:17.159 Deallocated/Unwritten Error: Supported 00:10:17.159 Deallocated Read Value: All 0x00 00:10:17.159 Deallocate in Write Zeroes: Not Supported 00:10:17.159 Deallocated Guard Field: 0xFFFF 00:10:17.159 Flush: Supported 00:10:17.159 Reservation: Not Supported 00:10:17.159 Namespace Sharing Capabilities: Multiple Controllers 00:10:17.159 Size (in LBAs): 262144 (1GiB) 00:10:17.159 Capacity (in LBAs): 262144 (1GiB) 00:10:17.159 Utilization (in LBAs): 262144 (1GiB) 00:10:17.159 Thin Provisioning: Not Supported 00:10:17.159 Per-NS Atomic Units: No 00:10:17.159 Maximum Single Source Range Length: 128 00:10:17.159 Maximum Copy Length: 128 00:10:17.159 Maximum Source Range Count: 128 00:10:17.159 NGUID/EUI64 Never Reused: No 00:10:17.159 Namespace Write Protected: No 00:10:17.159 Endurance group ID: 1 00:10:17.159 Number of LBA Formats: 8 00:10:17.159 Current LBA Format: LBA Format #04 00:10:17.159 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:17.159 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:17.159 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:17.159 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:17.159 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:17.159 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:17.159 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:17.159 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:17.159 00:10:17.159 Get Feature FDP: 00:10:17.159 ================ 00:10:17.159 Enabled: Yes 00:10:17.159 FDP configuration index: 0 00:10:17.159 00:10:17.159 FDP configurations log page 00:10:17.159 =========================== 00:10:17.159 Number of FDP configurations: 1 00:10:17.159 Version: 0 00:10:17.159 Size: 112 00:10:17.159 FDP Configuration Descriptor: 0 00:10:17.159 Descriptor Size: 96 00:10:17.159 Reclaim Group Identifier format: 2 00:10:17.159 FDP Volatile Write Cache: Not Present 00:10:17.159 FDP Configuration: Valid 00:10:17.159 Vendor Specific Size: 0 00:10:17.159 Number of Reclaim Groups: 2 00:10:17.159 Number of Recalim Unit Handles: 8 00:10:17.159 Max Placement Identifiers: 128 00:10:17.159 Number of 
Namespaces Suppprted: 256 00:10:17.159 Reclaim unit Nominal Size: 6000000 bytes 00:10:17.159 Estimated Reclaim Unit Time Limit: Not Reported 00:10:17.159 RUH Desc #000: RUH Type: Initially Isolated 00:10:17.159 RUH Desc #001: RUH Type: Initially Isolated 00:10:17.159 RUH Desc #002: RUH Type: Initially Isolated 00:10:17.159 RUH Desc #003: RUH Type: Initially Isolated 00:10:17.159 RUH Desc #004: RUH Type: Initially Isolated 00:10:17.159 RUH Desc #005: RUH Type: Initially Isolated 00:10:17.159 RUH Desc #006: RUH Type: Initially Isolated 00:10:17.159 RUH Desc #007: RUH Type: Initially Isolated 00:10:17.159 00:10:17.159 FDP reclaim unit handle usage log page 00:10:17.159 ====================================== 00:10:17.159 Number of Reclaim Unit Handles: 8 00:10:17.159 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:17.159 RUH Usage Desc #001: RUH Attributes: Unused 00:10:17.159 RUH Usage Desc #002: RUH Attributes: Unused 00:10:17.159 RUH Usage Desc #003: RUH Attributes: Unused 00:10:17.159 RUH Usage Desc #004: RUH Attributes: Unused 00:10:17.159 RUH Usage Desc #005: RUH Attributes: Unused 00:10:17.159 RUH Usage Desc #006: RUH Attributes: Unused 00:10:17.159 RUH Usage Desc #007: RUH Attributes: Unused 00:10:17.159 00:10:17.159 FDP statistics log page 00:10:17.159 ======================= 00:10:17.159 Host bytes with metadata written: 584572928 00:10:17.159 Med[2024-11-04 16:04:35.640763] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64212 terminated unexpected 00:10:17.159 ia bytes with metadata written: 584650752 00:10:17.159 Media bytes erased: 0 00:10:17.159 00:10:17.159 FDP events log page 00:10:17.159 =================== 00:10:17.159 Number of FDP events: 0 00:10:17.159 00:10:17.159 NVM Specific Namespace Data 00:10:17.159 =========================== 00:10:17.159 Logical Block Storage Tag Mask: 0 00:10:17.159 Protection Information Capabilities: 00:10:17.159 16b Guard Protection Information Storage Tag Support: No 00:10:17.159 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:17.159 Storage Tag Check Read Support: No 00:10:17.159 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.159 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.159 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.159 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.159 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.159 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.159 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.159 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.159 ===================================================== 00:10:17.159 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:17.159 ===================================================== 00:10:17.159 Controller Capabilities/Features 00:10:17.159 ================================ 00:10:17.159 Vendor ID: 1b36 00:10:17.159 Subsystem Vendor ID: 1af4 00:10:17.159 Serial Number: 12342 00:10:17.159 Model Number: QEMU NVMe Ctrl 00:10:17.159 Firmware Version: 8.0.0 00:10:17.159 Recommended Arb Burst: 6 00:10:17.159 IEEE OUI Identifier: 00 54 52 00:10:17.159 Multi-path I/O 
00:10:17.159 May have multiple subsystem ports: No 00:10:17.159 May have multiple controllers: No 00:10:17.159 Associated with SR-IOV VF: No 00:10:17.159 Max Data Transfer Size: 524288 00:10:17.159 Max Number of Namespaces: 256 00:10:17.159 Max Number of I/O Queues: 64 00:10:17.159 NVMe Specification Version (VS): 1.4 00:10:17.159 NVMe Specification Version (Identify): 1.4 00:10:17.159 Maximum Queue Entries: 2048 00:10:17.159 Contiguous Queues Required: Yes 00:10:17.159 Arbitration Mechanisms Supported 00:10:17.159 Weighted Round Robin: Not Supported 00:10:17.159 Vendor Specific: Not Supported 00:10:17.159 Reset Timeout: 7500 ms 00:10:17.159 Doorbell Stride: 4 bytes 00:10:17.160 NVM Subsystem Reset: Not Supported 00:10:17.160 Command Sets Supported 00:10:17.160 NVM Command Set: Supported 00:10:17.160 Boot Partition: Not Supported 00:10:17.160 Memory Page Size Minimum: 4096 bytes 00:10:17.160 Memory Page Size Maximum: 65536 bytes 00:10:17.160 Persistent Memory Region: Not Supported 00:10:17.160 Optional Asynchronous Events Supported 00:10:17.160 Namespace Attribute Notices: Supported 00:10:17.160 Firmware Activation Notices: Not Supported 00:10:17.160 ANA Change Notices: Not Supported 00:10:17.160 PLE Aggregate Log Change Notices: Not Supported 00:10:17.160 LBA Status Info Alert Notices: Not Supported 00:10:17.160 EGE Aggregate Log Change Notices: Not Supported 00:10:17.160 Normal NVM Subsystem Shutdown event: Not Supported 00:10:17.160 Zone Descriptor Change Notices: Not Supported 00:10:17.160 Discovery Log Change Notices: Not Supported 00:10:17.160 Controller Attributes 00:10:17.160 128-bit Host Identifier: Not Supported 00:10:17.160 Non-Operational Permissive Mode: Not Supported 00:10:17.160 NVM Sets: Not Supported 00:10:17.160 Read Recovery Levels: Not Supported 00:10:17.160 Endurance Groups: Not Supported 00:10:17.160 Predictable Latency Mode: Not Supported 00:10:17.160 Traffic Based Keep ALive: Not Supported 00:10:17.160 Namespace Granularity: Not Supported 00:10:17.160 SQ Associations: Not Supported 00:10:17.160 UUID List: Not Supported 00:10:17.160 Multi-Domain Subsystem: Not Supported 00:10:17.160 Fixed Capacity Management: Not Supported 00:10:17.160 Variable Capacity Management: Not Supported 00:10:17.160 Delete Endurance Group: Not Supported 00:10:17.160 Delete NVM Set: Not Supported 00:10:17.160 Extended LBA Formats Supported: Supported 00:10:17.160 Flexible Data Placement Supported: Not Supported 00:10:17.160 00:10:17.160 Controller Memory Buffer Support 00:10:17.160 ================================ 00:10:17.160 Supported: No 00:10:17.160 00:10:17.160 Persistent Memory Region Support 00:10:17.160 ================================ 00:10:17.160 Supported: No 00:10:17.160 00:10:17.160 Admin Command Set Attributes 00:10:17.160 ============================ 00:10:17.160 Security Send/Receive: Not Supported 00:10:17.160 Format NVM: Supported 00:10:17.160 Firmware Activate/Download: Not Supported 00:10:17.160 Namespace Management: Supported 00:10:17.160 Device Self-Test: Not Supported 00:10:17.160 Directives: Supported 00:10:17.160 NVMe-MI: Not Supported 00:10:17.160 Virtualization Management: Not Supported 00:10:17.160 Doorbell Buffer Config: Supported 00:10:17.160 Get LBA Status Capability: Not Supported 00:10:17.160 Command & Feature Lockdown Capability: Not Supported 00:10:17.160 Abort Command Limit: 4 00:10:17.160 Async Event Request Limit: 4 00:10:17.160 Number of Firmware Slots: N/A 00:10:17.160 Firmware Slot 1 Read-Only: N/A 00:10:17.160 Firmware Activation Without Reset: N/A 
00:10:17.160 Multiple Update Detection Support: N/A 00:10:17.160 Firmware Update Granularity: No Information Provided 00:10:17.160 Per-Namespace SMART Log: Yes 00:10:17.160 Asymmetric Namespace Access Log Page: Not Supported 00:10:17.160 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:17.160 Command Effects Log Page: Supported 00:10:17.160 Get Log Page Extended Data: Supported 00:10:17.160 Telemetry Log Pages: Not Supported 00:10:17.160 Persistent Event Log Pages: Not Supported 00:10:17.160 Supported Log Pages Log Page: May Support 00:10:17.160 Commands Supported & Effects Log Page: Not Supported 00:10:17.160 Feature Identifiers & Effects Log Page:May Support 00:10:17.160 NVMe-MI Commands & Effects Log Page: May Support 00:10:17.160 Data Area 4 for Telemetry Log: Not Supported 00:10:17.160 Error Log Page Entries Supported: 1 00:10:17.160 Keep Alive: Not Supported 00:10:17.160 00:10:17.160 NVM Command Set Attributes 00:10:17.160 ========================== 00:10:17.160 Submission Queue Entry Size 00:10:17.160 Max: 64 00:10:17.160 Min: 64 00:10:17.160 Completion Queue Entry Size 00:10:17.160 Max: 16 00:10:17.160 Min: 16 00:10:17.160 Number of Namespaces: 256 00:10:17.160 Compare Command: Supported 00:10:17.160 Write Uncorrectable Command: Not Supported 00:10:17.160 Dataset Management Command: Supported 00:10:17.160 Write Zeroes Command: Supported 00:10:17.160 Set Features Save Field: Supported 00:10:17.160 Reservations: Not Supported 00:10:17.160 Timestamp: Supported 00:10:17.160 Copy: Supported 00:10:17.160 Volatile Write Cache: Present 00:10:17.160 Atomic Write Unit (Normal): 1 00:10:17.160 Atomic Write Unit (PFail): 1 00:10:17.160 Atomic Compare & Write Unit: 1 00:10:17.160 Fused Compare & Write: Not Supported 00:10:17.160 Scatter-Gather List 00:10:17.160 SGL Command Set: Supported 00:10:17.160 SGL Keyed: Not Supported 00:10:17.160 SGL Bit Bucket Descriptor: Not Supported 00:10:17.160 SGL Metadata Pointer: Not Supported 00:10:17.160 Oversized SGL: Not Supported 00:10:17.160 SGL Metadata Address: Not Supported 00:10:17.160 SGL Offset: Not Supported 00:10:17.160 Transport SGL Data Block: Not Supported 00:10:17.160 Replay Protected Memory Block: Not Supported 00:10:17.160 00:10:17.160 Firmware Slot Information 00:10:17.160 ========================= 00:10:17.160 Active slot: 1 00:10:17.160 Slot 1 Firmware Revision: 1.0 00:10:17.160 00:10:17.160 00:10:17.160 Commands Supported and Effects 00:10:17.160 ============================== 00:10:17.160 Admin Commands 00:10:17.160 -------------- 00:10:17.160 Delete I/O Submission Queue (00h): Supported 00:10:17.160 Create I/O Submission Queue (01h): Supported 00:10:17.160 Get Log Page (02h): Supported 00:10:17.160 Delete I/O Completion Queue (04h): Supported 00:10:17.160 Create I/O Completion Queue (05h): Supported 00:10:17.160 Identify (06h): Supported 00:10:17.160 Abort (08h): Supported 00:10:17.160 Set Features (09h): Supported 00:10:17.160 Get Features (0Ah): Supported 00:10:17.160 Asynchronous Event Request (0Ch): Supported 00:10:17.160 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:17.160 Directive Send (19h): Supported 00:10:17.160 Directive Receive (1Ah): Supported 00:10:17.160 Virtualization Management (1Ch): Supported 00:10:17.160 Doorbell Buffer Config (7Ch): Supported 00:10:17.160 Format NVM (80h): Supported LBA-Change 00:10:17.160 I/O Commands 00:10:17.160 ------------ 00:10:17.160 Flush (00h): Supported LBA-Change 00:10:17.160 Write (01h): Supported LBA-Change 00:10:17.160 Read (02h): Supported 00:10:17.160 Compare (05h): 
Supported 00:10:17.160 Write Zeroes (08h): Supported LBA-Change 00:10:17.160 Dataset Management (09h): Supported LBA-Change 00:10:17.160 Unknown (0Ch): Supported 00:10:17.160 Unknown (12h): Supported 00:10:17.160 Copy (19h): Supported LBA-Change 00:10:17.160 Unknown (1Dh): Supported LBA-Change 00:10:17.160 00:10:17.160 Error Log 00:10:17.160 ========= 00:10:17.160 00:10:17.160 Arbitration 00:10:17.160 =========== 00:10:17.160 Arbitration Burst: no limit 00:10:17.160 00:10:17.160 Power Management 00:10:17.160 ================ 00:10:17.160 Number of Power States: 1 00:10:17.160 Current Power State: Power State #0 00:10:17.160 Power State #0: 00:10:17.160 Max Power: 25.00 W 00:10:17.160 Non-Operational State: Operational 00:10:17.161 Entry Latency: 16 microseconds 00:10:17.161 Exit Latency: 4 microseconds 00:10:17.161 Relative Read Throughput: 0 00:10:17.161 Relative Read Latency: 0 00:10:17.161 Relative Write Throughput: 0 00:10:17.161 Relative Write Latency: 0 00:10:17.161 Idle Power: Not Reported 00:10:17.161 Active Power: Not Reported 00:10:17.161 Non-Operational Permissive Mode: Not Supported 00:10:17.161 00:10:17.161 Health Information 00:10:17.161 ================== 00:10:17.161 Critical Warnings: 00:10:17.161 Available Spare Space: OK 00:10:17.161 Temperature: OK 00:10:17.161 Device Reliability: OK 00:10:17.161 Read Only: No 00:10:17.161 Volatile Memory Backup: OK 00:10:17.161 Current Temperature: 323 Kelvin (50 Celsius) 00:10:17.161 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:17.161 Available Spare: 0% 00:10:17.161 Available Spare Threshold: 0% 00:10:17.161 Life Percentage Used: 0% 00:10:17.161 Data Units Read: 2459 00:10:17.161 Data Units Written: 2247 00:10:17.161 Host Read Commands: 97094 00:10:17.161 Host Write Commands: 95364 00:10:17.161 Controller Busy Time: 0 minutes 00:10:17.161 Power Cycles: 0 00:10:17.161 Power On Hours: 0 hours 00:10:17.161 Unsafe Shutdowns: 0 00:10:17.161 Unrecoverable Media Errors: 0 00:10:17.161 Lifetime Error Log Entries: 0 00:10:17.161 Warning Temperature Time: 0 minutes 00:10:17.161 Critical Temperature Time: 0 minutes 00:10:17.161 00:10:17.161 Number of Queues 00:10:17.161 ================ 00:10:17.161 Number of I/O Submission Queues: 64 00:10:17.161 Number of I/O Completion Queues: 64 00:10:17.161 00:10:17.161 ZNS Specific Controller Data 00:10:17.161 ============================ 00:10:17.161 Zone Append Size Limit: 0 00:10:17.161 00:10:17.161 00:10:17.161 Active Namespaces 00:10:17.161 ================= 00:10:17.161 Namespace ID:1 00:10:17.161 Error Recovery Timeout: Unlimited 00:10:17.161 Command Set Identifier: NVM (00h) 00:10:17.161 Deallocate: Supported 00:10:17.161 Deallocated/Unwritten Error: Supported 00:10:17.161 Deallocated Read Value: All 0x00 00:10:17.161 Deallocate in Write Zeroes: Not Supported 00:10:17.161 Deallocated Guard Field: 0xFFFF 00:10:17.161 Flush: Supported 00:10:17.161 Reservation: Not Supported 00:10:17.161 Namespace Sharing Capabilities: Private 00:10:17.161 Size (in LBAs): 1048576 (4GiB) 00:10:17.161 Capacity (in LBAs): 1048576 (4GiB) 00:10:17.161 Utilization (in LBAs): 1048576 (4GiB) 00:10:17.161 Thin Provisioning: Not Supported 00:10:17.161 Per-NS Atomic Units: No 00:10:17.161 Maximum Single Source Range Length: 128 00:10:17.161 Maximum Copy Length: 128 00:10:17.161 Maximum Source Range Count: 128 00:10:17.161 NGUID/EUI64 Never Reused: No 00:10:17.161 Namespace Write Protected: No 00:10:17.161 Number of LBA Formats: 8 00:10:17.161 Current LBA Format: LBA Format #04 00:10:17.161 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:10:17.161 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:17.161 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:17.161 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:17.161 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:17.161 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:17.161 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:17.161 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:17.161 00:10:17.161 NVM Specific Namespace Data 00:10:17.161 =========================== 00:10:17.161 Logical Block Storage Tag Mask: 0 00:10:17.161 Protection Information Capabilities: 00:10:17.161 16b Guard Protection Information Storage Tag Support: No 00:10:17.161 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:17.161 Storage Tag Check Read Support: No 00:10:17.161 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.161 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.161 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.161 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.161 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.161 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.161 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.161 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.161 Namespace ID:2 00:10:17.161 Error Recovery Timeout: Unlimited 00:10:17.161 Command Set Identifier: NVM (00h) 00:10:17.161 Deallocate: Supported 00:10:17.161 Deallocated/Unwritten Error: Supported 00:10:17.161 Deallocated Read Value: All 0x00 00:10:17.161 Deallocate in Write Zeroes: Not Supported 00:10:17.161 Deallocated Guard Field: 0xFFFF 00:10:17.161 Flush: Supported 00:10:17.161 Reservation: Not Supported 00:10:17.161 Namespace Sharing Capabilities: Private 00:10:17.161 Size (in LBAs): 1048576 (4GiB) 00:10:17.161 Capacity (in LBAs): 1048576 (4GiB) 00:10:17.161 Utilization (in LBAs): 1048576 (4GiB) 00:10:17.161 Thin Provisioning: Not Supported 00:10:17.161 Per-NS Atomic Units: No 00:10:17.161 Maximum Single Source Range Length: 128 00:10:17.161 Maximum Copy Length: 128 00:10:17.161 Maximum Source Range Count: 128 00:10:17.161 NGUID/EUI64 Never Reused: No 00:10:17.161 Namespace Write Protected: No 00:10:17.161 Number of LBA Formats: 8 00:10:17.161 Current LBA Format: LBA Format #04 00:10:17.161 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:17.161 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:17.161 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:17.161 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:17.161 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:17.161 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:17.161 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:17.161 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:17.161 00:10:17.161 NVM Specific Namespace Data 00:10:17.161 =========================== 00:10:17.161 Logical Block Storage Tag Mask: 0 00:10:17.161 Protection Information Capabilities: 00:10:17.161 16b Guard Protection Information Storage Tag Support: No 00:10:17.161 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:10:17.161 Storage Tag Check Read Support: No 00:10:17.161 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.161 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.161 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.161 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.161 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.161 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.161 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.161 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.161 Namespace ID:3 00:10:17.161 Error Recovery Timeout: Unlimited 00:10:17.161 Command Set Identifier: NVM (00h) 00:10:17.161 Deallocate: Supported 00:10:17.161 Deallocated/Unwritten Error: Supported 00:10:17.161 Deallocated Read Value: All 0x00 00:10:17.161 Deallocate in Write Zeroes: Not Supported 00:10:17.161 Deallocated Guard Field: 0xFFFF 00:10:17.161 Flush: Supported 00:10:17.161 Reservation: Not Supported 00:10:17.161 Namespace Sharing Capabilities: Private 00:10:17.161 Size (in LBAs): 1048576 (4GiB) 00:10:17.161 Capacity (in LBAs): 1048576 (4GiB) 00:10:17.161 Utilization (in LBAs): 1048576 (4GiB) 00:10:17.161 Thin Provisioning: Not Supported 00:10:17.161 Per-NS Atomic Units: No 00:10:17.161 Maximum Single Source Range Length: 128 00:10:17.161 Maximum Copy Length: 128 00:10:17.161 Maximum Source Range Count: 128 00:10:17.161 NGUID/EUI64 Never Reused: No 00:10:17.161 Namespace Write Protected: No 00:10:17.161 Number of LBA Formats: 8 00:10:17.161 Current LBA Format: LBA Format #04 00:10:17.161 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:17.161 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:17.161 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:17.161 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:17.161 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:17.161 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:17.161 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:17.161 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:17.161 00:10:17.161 NVM Specific Namespace Data 00:10:17.161 =========================== 00:10:17.161 Logical Block Storage Tag Mask: 0 00:10:17.161 Protection Information Capabilities: 00:10:17.161 16b Guard Protection Information Storage Tag Support: No 00:10:17.162 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:17.162 Storage Tag Check Read Support: No 00:10:17.162 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.162 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.162 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.162 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.162 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.162 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.162 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.162 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.162 16:04:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:17.162 16:04:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:17.421 ===================================================== 00:10:17.421 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:17.421 ===================================================== 00:10:17.421 Controller Capabilities/Features 00:10:17.421 ================================ 00:10:17.421 Vendor ID: 1b36 00:10:17.421 Subsystem Vendor ID: 1af4 00:10:17.421 Serial Number: 12340 00:10:17.421 Model Number: QEMU NVMe Ctrl 00:10:17.421 Firmware Version: 8.0.0 00:10:17.421 Recommended Arb Burst: 6 00:10:17.421 IEEE OUI Identifier: 00 54 52 00:10:17.421 Multi-path I/O 00:10:17.421 May have multiple subsystem ports: No 00:10:17.421 May have multiple controllers: No 00:10:17.421 Associated with SR-IOV VF: No 00:10:17.421 Max Data Transfer Size: 524288 00:10:17.421 Max Number of Namespaces: 256 00:10:17.421 Max Number of I/O Queues: 64 00:10:17.421 NVMe Specification Version (VS): 1.4 00:10:17.421 NVMe Specification Version (Identify): 1.4 00:10:17.421 Maximum Queue Entries: 2048 00:10:17.421 Contiguous Queues Required: Yes 00:10:17.421 Arbitration Mechanisms Supported 00:10:17.421 Weighted Round Robin: Not Supported 00:10:17.421 Vendor Specific: Not Supported 00:10:17.421 Reset Timeout: 7500 ms 00:10:17.421 Doorbell Stride: 4 bytes 00:10:17.421 NVM Subsystem Reset: Not Supported 00:10:17.421 Command Sets Supported 00:10:17.421 NVM Command Set: Supported 00:10:17.421 Boot Partition: Not Supported 00:10:17.421 Memory Page Size Minimum: 4096 bytes 00:10:17.421 Memory Page Size Maximum: 65536 bytes 00:10:17.421 Persistent Memory Region: Not Supported 00:10:17.421 Optional Asynchronous Events Supported 00:10:17.421 Namespace Attribute Notices: Supported 00:10:17.421 Firmware Activation Notices: Not Supported 00:10:17.421 ANA Change Notices: Not Supported 00:10:17.421 PLE Aggregate Log Change Notices: Not Supported 00:10:17.421 LBA Status Info Alert Notices: Not Supported 00:10:17.421 EGE Aggregate Log Change Notices: Not Supported 00:10:17.421 Normal NVM Subsystem Shutdown event: Not Supported 00:10:17.421 Zone Descriptor Change Notices: Not Supported 00:10:17.421 Discovery Log Change Notices: Not Supported 00:10:17.421 Controller Attributes 00:10:17.421 128-bit Host Identifier: Not Supported 00:10:17.421 Non-Operational Permissive Mode: Not Supported 00:10:17.421 NVM Sets: Not Supported 00:10:17.421 Read Recovery Levels: Not Supported 00:10:17.421 Endurance Groups: Not Supported 00:10:17.421 Predictable Latency Mode: Not Supported 00:10:17.421 Traffic Based Keep ALive: Not Supported 00:10:17.422 Namespace Granularity: Not Supported 00:10:17.422 SQ Associations: Not Supported 00:10:17.422 UUID List: Not Supported 00:10:17.422 Multi-Domain Subsystem: Not Supported 00:10:17.422 Fixed Capacity Management: Not Supported 00:10:17.422 Variable Capacity Management: Not Supported 00:10:17.422 Delete Endurance Group: Not Supported 00:10:17.422 Delete NVM Set: Not Supported 00:10:17.422 Extended LBA Formats Supported: Supported 00:10:17.422 Flexible Data Placement Supported: Not Supported 00:10:17.422 00:10:17.422 Controller Memory Buffer Support 00:10:17.422 ================================ 00:10:17.422 Supported: No 00:10:17.422 00:10:17.422 Persistent Memory Region Support 00:10:17.422 
================================ 00:10:17.422 Supported: No 00:10:17.422 00:10:17.422 Admin Command Set Attributes 00:10:17.422 ============================ 00:10:17.422 Security Send/Receive: Not Supported 00:10:17.422 Format NVM: Supported 00:10:17.422 Firmware Activate/Download: Not Supported 00:10:17.422 Namespace Management: Supported 00:10:17.422 Device Self-Test: Not Supported 00:10:17.422 Directives: Supported 00:10:17.422 NVMe-MI: Not Supported 00:10:17.422 Virtualization Management: Not Supported 00:10:17.422 Doorbell Buffer Config: Supported 00:10:17.422 Get LBA Status Capability: Not Supported 00:10:17.422 Command & Feature Lockdown Capability: Not Supported 00:10:17.422 Abort Command Limit: 4 00:10:17.422 Async Event Request Limit: 4 00:10:17.422 Number of Firmware Slots: N/A 00:10:17.422 Firmware Slot 1 Read-Only: N/A 00:10:17.422 Firmware Activation Without Reset: N/A 00:10:17.422 Multiple Update Detection Support: N/A 00:10:17.422 Firmware Update Granularity: No Information Provided 00:10:17.422 Per-Namespace SMART Log: Yes 00:10:17.422 Asymmetric Namespace Access Log Page: Not Supported 00:10:17.422 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:17.422 Command Effects Log Page: Supported 00:10:17.422 Get Log Page Extended Data: Supported 00:10:17.422 Telemetry Log Pages: Not Supported 00:10:17.422 Persistent Event Log Pages: Not Supported 00:10:17.422 Supported Log Pages Log Page: May Support 00:10:17.422 Commands Supported & Effects Log Page: Not Supported 00:10:17.422 Feature Identifiers & Effects Log Page:May Support 00:10:17.422 NVMe-MI Commands & Effects Log Page: May Support 00:10:17.422 Data Area 4 for Telemetry Log: Not Supported 00:10:17.422 Error Log Page Entries Supported: 1 00:10:17.422 Keep Alive: Not Supported 00:10:17.422 00:10:17.422 NVM Command Set Attributes 00:10:17.422 ========================== 00:10:17.422 Submission Queue Entry Size 00:10:17.422 Max: 64 00:10:17.422 Min: 64 00:10:17.422 Completion Queue Entry Size 00:10:17.422 Max: 16 00:10:17.422 Min: 16 00:10:17.422 Number of Namespaces: 256 00:10:17.422 Compare Command: Supported 00:10:17.422 Write Uncorrectable Command: Not Supported 00:10:17.422 Dataset Management Command: Supported 00:10:17.422 Write Zeroes Command: Supported 00:10:17.422 Set Features Save Field: Supported 00:10:17.422 Reservations: Not Supported 00:10:17.422 Timestamp: Supported 00:10:17.422 Copy: Supported 00:10:17.422 Volatile Write Cache: Present 00:10:17.422 Atomic Write Unit (Normal): 1 00:10:17.422 Atomic Write Unit (PFail): 1 00:10:17.422 Atomic Compare & Write Unit: 1 00:10:17.422 Fused Compare & Write: Not Supported 00:10:17.422 Scatter-Gather List 00:10:17.422 SGL Command Set: Supported 00:10:17.422 SGL Keyed: Not Supported 00:10:17.422 SGL Bit Bucket Descriptor: Not Supported 00:10:17.422 SGL Metadata Pointer: Not Supported 00:10:17.422 Oversized SGL: Not Supported 00:10:17.422 SGL Metadata Address: Not Supported 00:10:17.422 SGL Offset: Not Supported 00:10:17.422 Transport SGL Data Block: Not Supported 00:10:17.422 Replay Protected Memory Block: Not Supported 00:10:17.422 00:10:17.422 Firmware Slot Information 00:10:17.422 ========================= 00:10:17.422 Active slot: 1 00:10:17.422 Slot 1 Firmware Revision: 1.0 00:10:17.422 00:10:17.422 00:10:17.422 Commands Supported and Effects 00:10:17.422 ============================== 00:10:17.422 Admin Commands 00:10:17.422 -------------- 00:10:17.422 Delete I/O Submission Queue (00h): Supported 00:10:17.422 Create I/O Submission Queue (01h): Supported 00:10:17.422 
Get Log Page (02h): Supported 00:10:17.422 Delete I/O Completion Queue (04h): Supported 00:10:17.422 Create I/O Completion Queue (05h): Supported 00:10:17.422 Identify (06h): Supported 00:10:17.422 Abort (08h): Supported 00:10:17.422 Set Features (09h): Supported 00:10:17.422 Get Features (0Ah): Supported 00:10:17.422 Asynchronous Event Request (0Ch): Supported 00:10:17.422 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:17.422 Directive Send (19h): Supported 00:10:17.422 Directive Receive (1Ah): Supported 00:10:17.422 Virtualization Management (1Ch): Supported 00:10:17.422 Doorbell Buffer Config (7Ch): Supported 00:10:17.422 Format NVM (80h): Supported LBA-Change 00:10:17.422 I/O Commands 00:10:17.422 ------------ 00:10:17.422 Flush (00h): Supported LBA-Change 00:10:17.422 Write (01h): Supported LBA-Change 00:10:17.422 Read (02h): Supported 00:10:17.422 Compare (05h): Supported 00:10:17.422 Write Zeroes (08h): Supported LBA-Change 00:10:17.422 Dataset Management (09h): Supported LBA-Change 00:10:17.422 Unknown (0Ch): Supported 00:10:17.422 Unknown (12h): Supported 00:10:17.422 Copy (19h): Supported LBA-Change 00:10:17.422 Unknown (1Dh): Supported LBA-Change 00:10:17.422 00:10:17.422 Error Log 00:10:17.422 ========= 00:10:17.422 00:10:17.422 Arbitration 00:10:17.422 =========== 00:10:17.422 Arbitration Burst: no limit 00:10:17.422 00:10:17.422 Power Management 00:10:17.422 ================ 00:10:17.422 Number of Power States: 1 00:10:17.422 Current Power State: Power State #0 00:10:17.422 Power State #0: 00:10:17.422 Max Power: 25.00 W 00:10:17.422 Non-Operational State: Operational 00:10:17.422 Entry Latency: 16 microseconds 00:10:17.422 Exit Latency: 4 microseconds 00:10:17.422 Relative Read Throughput: 0 00:10:17.422 Relative Read Latency: 0 00:10:17.422 Relative Write Throughput: 0 00:10:17.422 Relative Write Latency: 0 00:10:17.422 Idle Power: Not Reported 00:10:17.422 Active Power: Not Reported 00:10:17.422 Non-Operational Permissive Mode: Not Supported 00:10:17.422 00:10:17.422 Health Information 00:10:17.422 ================== 00:10:17.422 Critical Warnings: 00:10:17.422 Available Spare Space: OK 00:10:17.422 Temperature: OK 00:10:17.422 Device Reliability: OK 00:10:17.422 Read Only: No 00:10:17.422 Volatile Memory Backup: OK 00:10:17.422 Current Temperature: 323 Kelvin (50 Celsius) 00:10:17.422 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:17.422 Available Spare: 0% 00:10:17.422 Available Spare Threshold: 0% 00:10:17.422 Life Percentage Used: 0% 00:10:17.422 Data Units Read: 720 00:10:17.422 Data Units Written: 648 00:10:17.422 Host Read Commands: 31384 00:10:17.422 Host Write Commands: 31170 00:10:17.422 Controller Busy Time: 0 minutes 00:10:17.422 Power Cycles: 0 00:10:17.422 Power On Hours: 0 hours 00:10:17.422 Unsafe Shutdowns: 0 00:10:17.422 Unrecoverable Media Errors: 0 00:10:17.422 Lifetime Error Log Entries: 0 00:10:17.422 Warning Temperature Time: 0 minutes 00:10:17.422 Critical Temperature Time: 0 minutes 00:10:17.422 00:10:17.422 Number of Queues 00:10:17.422 ================ 00:10:17.422 Number of I/O Submission Queues: 64 00:10:17.422 Number of I/O Completion Queues: 64 00:10:17.422 00:10:17.422 ZNS Specific Controller Data 00:10:17.422 ============================ 00:10:17.422 Zone Append Size Limit: 0 00:10:17.422 00:10:17.422 00:10:17.422 Active Namespaces 00:10:17.422 ================= 00:10:17.422 Namespace ID:1 00:10:17.422 Error Recovery Timeout: Unlimited 00:10:17.422 Command Set Identifier: NVM (00h) 00:10:17.422 Deallocate: Supported 
00:10:17.422 Deallocated/Unwritten Error: Supported 00:10:17.422 Deallocated Read Value: All 0x00 00:10:17.422 Deallocate in Write Zeroes: Not Supported 00:10:17.422 Deallocated Guard Field: 0xFFFF 00:10:17.422 Flush: Supported 00:10:17.422 Reservation: Not Supported 00:10:17.422 Metadata Transferred as: Separate Metadata Buffer 00:10:17.422 Namespace Sharing Capabilities: Private 00:10:17.422 Size (in LBAs): 1548666 (5GiB) 00:10:17.422 Capacity (in LBAs): 1548666 (5GiB) 00:10:17.422 Utilization (in LBAs): 1548666 (5GiB) 00:10:17.422 Thin Provisioning: Not Supported 00:10:17.422 Per-NS Atomic Units: No 00:10:17.423 Maximum Single Source Range Length: 128 00:10:17.423 Maximum Copy Length: 128 00:10:17.423 Maximum Source Range Count: 128 00:10:17.423 NGUID/EUI64 Never Reused: No 00:10:17.423 Namespace Write Protected: No 00:10:17.423 Number of LBA Formats: 8 00:10:17.423 Current LBA Format: LBA Format #07 00:10:17.423 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:17.423 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:17.423 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:17.423 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:17.423 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:17.423 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:17.423 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:17.423 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:17.423 00:10:17.423 NVM Specific Namespace Data 00:10:17.423 =========================== 00:10:17.423 Logical Block Storage Tag Mask: 0 00:10:17.423 Protection Information Capabilities: 00:10:17.423 16b Guard Protection Information Storage Tag Support: No 00:10:17.423 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:17.423 Storage Tag Check Read Support: No 00:10:17.423 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.423 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.423 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.423 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.423 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.423 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.423 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.423 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.423 16:04:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:17.423 16:04:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:17.682 ===================================================== 00:10:17.682 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:17.682 ===================================================== 00:10:17.682 Controller Capabilities/Features 00:10:17.682 ================================ 00:10:17.682 Vendor ID: 1b36 00:10:17.682 Subsystem Vendor ID: 1af4 00:10:17.682 Serial Number: 12341 00:10:17.682 Model Number: QEMU NVMe Ctrl 00:10:17.682 Firmware Version: 8.0.0 00:10:17.682 Recommended Arb Burst: 6 00:10:17.682 IEEE OUI Identifier: 00 54 52 00:10:17.682 Multi-path I/O 00:10:17.682 May have multiple subsystem ports: No 00:10:17.682 May have multiple 
controllers: No 00:10:17.682 Associated with SR-IOV VF: No 00:10:17.682 Max Data Transfer Size: 524288 00:10:17.683 Max Number of Namespaces: 256 00:10:17.683 Max Number of I/O Queues: 64 00:10:17.683 NVMe Specification Version (VS): 1.4 00:10:17.683 NVMe Specification Version (Identify): 1.4 00:10:17.683 Maximum Queue Entries: 2048 00:10:17.683 Contiguous Queues Required: Yes 00:10:17.683 Arbitration Mechanisms Supported 00:10:17.683 Weighted Round Robin: Not Supported 00:10:17.683 Vendor Specific: Not Supported 00:10:17.683 Reset Timeout: 7500 ms 00:10:17.683 Doorbell Stride: 4 bytes 00:10:17.683 NVM Subsystem Reset: Not Supported 00:10:17.683 Command Sets Supported 00:10:17.683 NVM Command Set: Supported 00:10:17.683 Boot Partition: Not Supported 00:10:17.683 Memory Page Size Minimum: 4096 bytes 00:10:17.683 Memory Page Size Maximum: 65536 bytes 00:10:17.683 Persistent Memory Region: Not Supported 00:10:17.683 Optional Asynchronous Events Supported 00:10:17.683 Namespace Attribute Notices: Supported 00:10:17.683 Firmware Activation Notices: Not Supported 00:10:17.683 ANA Change Notices: Not Supported 00:10:17.683 PLE Aggregate Log Change Notices: Not Supported 00:10:17.683 LBA Status Info Alert Notices: Not Supported 00:10:17.683 EGE Aggregate Log Change Notices: Not Supported 00:10:17.683 Normal NVM Subsystem Shutdown event: Not Supported 00:10:17.683 Zone Descriptor Change Notices: Not Supported 00:10:17.683 Discovery Log Change Notices: Not Supported 00:10:17.683 Controller Attributes 00:10:17.683 128-bit Host Identifier: Not Supported 00:10:17.683 Non-Operational Permissive Mode: Not Supported 00:10:17.683 NVM Sets: Not Supported 00:10:17.683 Read Recovery Levels: Not Supported 00:10:17.683 Endurance Groups: Not Supported 00:10:17.683 Predictable Latency Mode: Not Supported 00:10:17.683 Traffic Based Keep ALive: Not Supported 00:10:17.683 Namespace Granularity: Not Supported 00:10:17.683 SQ Associations: Not Supported 00:10:17.683 UUID List: Not Supported 00:10:17.683 Multi-Domain Subsystem: Not Supported 00:10:17.683 Fixed Capacity Management: Not Supported 00:10:17.683 Variable Capacity Management: Not Supported 00:10:17.683 Delete Endurance Group: Not Supported 00:10:17.683 Delete NVM Set: Not Supported 00:10:17.683 Extended LBA Formats Supported: Supported 00:10:17.683 Flexible Data Placement Supported: Not Supported 00:10:17.683 00:10:17.683 Controller Memory Buffer Support 00:10:17.683 ================================ 00:10:17.683 Supported: No 00:10:17.683 00:10:17.683 Persistent Memory Region Support 00:10:17.683 ================================ 00:10:17.683 Supported: No 00:10:17.683 00:10:17.683 Admin Command Set Attributes 00:10:17.683 ============================ 00:10:17.683 Security Send/Receive: Not Supported 00:10:17.683 Format NVM: Supported 00:10:17.683 Firmware Activate/Download: Not Supported 00:10:17.683 Namespace Management: Supported 00:10:17.683 Device Self-Test: Not Supported 00:10:17.683 Directives: Supported 00:10:17.683 NVMe-MI: Not Supported 00:10:17.683 Virtualization Management: Not Supported 00:10:17.683 Doorbell Buffer Config: Supported 00:10:17.683 Get LBA Status Capability: Not Supported 00:10:17.683 Command & Feature Lockdown Capability: Not Supported 00:10:17.683 Abort Command Limit: 4 00:10:17.683 Async Event Request Limit: 4 00:10:17.683 Number of Firmware Slots: N/A 00:10:17.683 Firmware Slot 1 Read-Only: N/A 00:10:17.683 Firmware Activation Without Reset: N/A 00:10:17.683 Multiple Update Detection Support: N/A 00:10:17.683 Firmware Update 
Granularity: No Information Provided 00:10:17.683 Per-Namespace SMART Log: Yes 00:10:17.683 Asymmetric Namespace Access Log Page: Not Supported 00:10:17.683 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:17.683 Command Effects Log Page: Supported 00:10:17.683 Get Log Page Extended Data: Supported 00:10:17.683 Telemetry Log Pages: Not Supported 00:10:17.683 Persistent Event Log Pages: Not Supported 00:10:17.683 Supported Log Pages Log Page: May Support 00:10:17.683 Commands Supported & Effects Log Page: Not Supported 00:10:17.683 Feature Identifiers & Effects Log Page:May Support 00:10:17.683 NVMe-MI Commands & Effects Log Page: May Support 00:10:17.683 Data Area 4 for Telemetry Log: Not Supported 00:10:17.683 Error Log Page Entries Supported: 1 00:10:17.683 Keep Alive: Not Supported 00:10:17.683 00:10:17.683 NVM Command Set Attributes 00:10:17.683 ========================== 00:10:17.683 Submission Queue Entry Size 00:10:17.683 Max: 64 00:10:17.683 Min: 64 00:10:17.683 Completion Queue Entry Size 00:10:17.683 Max: 16 00:10:17.683 Min: 16 00:10:17.683 Number of Namespaces: 256 00:10:17.683 Compare Command: Supported 00:10:17.683 Write Uncorrectable Command: Not Supported 00:10:17.683 Dataset Management Command: Supported 00:10:17.683 Write Zeroes Command: Supported 00:10:17.683 Set Features Save Field: Supported 00:10:17.683 Reservations: Not Supported 00:10:17.683 Timestamp: Supported 00:10:17.683 Copy: Supported 00:10:17.683 Volatile Write Cache: Present 00:10:17.683 Atomic Write Unit (Normal): 1 00:10:17.683 Atomic Write Unit (PFail): 1 00:10:17.683 Atomic Compare & Write Unit: 1 00:10:17.683 Fused Compare & Write: Not Supported 00:10:17.683 Scatter-Gather List 00:10:17.683 SGL Command Set: Supported 00:10:17.683 SGL Keyed: Not Supported 00:10:17.683 SGL Bit Bucket Descriptor: Not Supported 00:10:17.683 SGL Metadata Pointer: Not Supported 00:10:17.683 Oversized SGL: Not Supported 00:10:17.683 SGL Metadata Address: Not Supported 00:10:17.683 SGL Offset: Not Supported 00:10:17.683 Transport SGL Data Block: Not Supported 00:10:17.683 Replay Protected Memory Block: Not Supported 00:10:17.683 00:10:17.683 Firmware Slot Information 00:10:17.683 ========================= 00:10:17.683 Active slot: 1 00:10:17.683 Slot 1 Firmware Revision: 1.0 00:10:17.683 00:10:17.683 00:10:17.683 Commands Supported and Effects 00:10:17.683 ============================== 00:10:17.683 Admin Commands 00:10:17.683 -------------- 00:10:17.683 Delete I/O Submission Queue (00h): Supported 00:10:17.683 Create I/O Submission Queue (01h): Supported 00:10:17.683 Get Log Page (02h): Supported 00:10:17.683 Delete I/O Completion Queue (04h): Supported 00:10:17.683 Create I/O Completion Queue (05h): Supported 00:10:17.683 Identify (06h): Supported 00:10:17.683 Abort (08h): Supported 00:10:17.683 Set Features (09h): Supported 00:10:17.683 Get Features (0Ah): Supported 00:10:17.683 Asynchronous Event Request (0Ch): Supported 00:10:17.683 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:17.683 Directive Send (19h): Supported 00:10:17.683 Directive Receive (1Ah): Supported 00:10:17.683 Virtualization Management (1Ch): Supported 00:10:17.683 Doorbell Buffer Config (7Ch): Supported 00:10:17.683 Format NVM (80h): Supported LBA-Change 00:10:17.683 I/O Commands 00:10:17.683 ------------ 00:10:17.683 Flush (00h): Supported LBA-Change 00:10:17.683 Write (01h): Supported LBA-Change 00:10:17.683 Read (02h): Supported 00:10:17.683 Compare (05h): Supported 00:10:17.683 Write Zeroes (08h): Supported LBA-Change 00:10:17.683 
Dataset Management (09h): Supported LBA-Change 00:10:17.683 Unknown (0Ch): Supported 00:10:17.683 Unknown (12h): Supported 00:10:17.683 Copy (19h): Supported LBA-Change 00:10:17.683 Unknown (1Dh): Supported LBA-Change 00:10:17.683 00:10:17.683 Error Log 00:10:17.683 ========= 00:10:17.683 00:10:17.683 Arbitration 00:10:17.683 =========== 00:10:17.683 Arbitration Burst: no limit 00:10:17.683 00:10:17.683 Power Management 00:10:17.683 ================ 00:10:17.683 Number of Power States: 1 00:10:17.683 Current Power State: Power State #0 00:10:17.683 Power State #0: 00:10:17.683 Max Power: 25.00 W 00:10:17.683 Non-Operational State: Operational 00:10:17.683 Entry Latency: 16 microseconds 00:10:17.683 Exit Latency: 4 microseconds 00:10:17.683 Relative Read Throughput: 0 00:10:17.683 Relative Read Latency: 0 00:10:17.683 Relative Write Throughput: 0 00:10:17.684 Relative Write Latency: 0 00:10:17.684 Idle Power: Not Reported 00:10:17.684 Active Power: Not Reported 00:10:17.684 Non-Operational Permissive Mode: Not Supported 00:10:17.684 00:10:17.684 Health Information 00:10:17.684 ================== 00:10:17.684 Critical Warnings: 00:10:17.684 Available Spare Space: OK 00:10:17.684 Temperature: OK 00:10:17.684 Device Reliability: OK 00:10:17.684 Read Only: No 00:10:17.684 Volatile Memory Backup: OK 00:10:17.684 Current Temperature: 323 Kelvin (50 Celsius) 00:10:17.684 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:17.684 Available Spare: 0% 00:10:17.684 Available Spare Threshold: 0% 00:10:17.684 Life Percentage Used: 0% 00:10:17.684 Data Units Read: 1109 00:10:17.684 Data Units Written: 983 00:10:17.684 Host Read Commands: 46645 00:10:17.684 Host Write Commands: 45548 00:10:17.684 Controller Busy Time: 0 minutes 00:10:17.684 Power Cycles: 0 00:10:17.684 Power On Hours: 0 hours 00:10:17.684 Unsafe Shutdowns: 0 00:10:17.684 Unrecoverable Media Errors: 0 00:10:17.684 Lifetime Error Log Entries: 0 00:10:17.684 Warning Temperature Time: 0 minutes 00:10:17.684 Critical Temperature Time: 0 minutes 00:10:17.684 00:10:17.684 Number of Queues 00:10:17.684 ================ 00:10:17.684 Number of I/O Submission Queues: 64 00:10:17.684 Number of I/O Completion Queues: 64 00:10:17.684 00:10:17.684 ZNS Specific Controller Data 00:10:17.684 ============================ 00:10:17.684 Zone Append Size Limit: 0 00:10:17.684 00:10:17.684 00:10:17.684 Active Namespaces 00:10:17.684 ================= 00:10:17.684 Namespace ID:1 00:10:17.684 Error Recovery Timeout: Unlimited 00:10:17.684 Command Set Identifier: NVM (00h) 00:10:17.684 Deallocate: Supported 00:10:17.684 Deallocated/Unwritten Error: Supported 00:10:17.684 Deallocated Read Value: All 0x00 00:10:17.684 Deallocate in Write Zeroes: Not Supported 00:10:17.684 Deallocated Guard Field: 0xFFFF 00:10:17.684 Flush: Supported 00:10:17.684 Reservation: Not Supported 00:10:17.684 Namespace Sharing Capabilities: Private 00:10:17.684 Size (in LBAs): 1310720 (5GiB) 00:10:17.684 Capacity (in LBAs): 1310720 (5GiB) 00:10:17.684 Utilization (in LBAs): 1310720 (5GiB) 00:10:17.684 Thin Provisioning: Not Supported 00:10:17.684 Per-NS Atomic Units: No 00:10:17.684 Maximum Single Source Range Length: 128 00:10:17.684 Maximum Copy Length: 128 00:10:17.684 Maximum Source Range Count: 128 00:10:17.684 NGUID/EUI64 Never Reused: No 00:10:17.684 Namespace Write Protected: No 00:10:17.684 Number of LBA Formats: 8 00:10:17.684 Current LBA Format: LBA Format #04 00:10:17.684 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:17.684 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:10:17.684 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:17.684 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:17.684 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:17.684 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:17.684 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:17.684 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:17.684 00:10:17.684 NVM Specific Namespace Data 00:10:17.684 =========================== 00:10:17.684 Logical Block Storage Tag Mask: 0 00:10:17.684 Protection Information Capabilities: 00:10:17.684 16b Guard Protection Information Storage Tag Support: No 00:10:17.684 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:17.684 Storage Tag Check Read Support: No 00:10:17.684 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.684 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.684 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.684 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.684 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.684 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.684 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.684 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.684 16:04:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:17.684 16:04:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:17.944 ===================================================== 00:10:17.944 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:17.944 ===================================================== 00:10:17.944 Controller Capabilities/Features 00:10:17.944 ================================ 00:10:17.944 Vendor ID: 1b36 00:10:17.944 Subsystem Vendor ID: 1af4 00:10:17.944 Serial Number: 12342 00:10:17.944 Model Number: QEMU NVMe Ctrl 00:10:17.944 Firmware Version: 8.0.0 00:10:17.944 Recommended Arb Burst: 6 00:10:17.944 IEEE OUI Identifier: 00 54 52 00:10:17.944 Multi-path I/O 00:10:17.944 May have multiple subsystem ports: No 00:10:17.944 May have multiple controllers: No 00:10:17.944 Associated with SR-IOV VF: No 00:10:17.944 Max Data Transfer Size: 524288 00:10:17.944 Max Number of Namespaces: 256 00:10:17.944 Max Number of I/O Queues: 64 00:10:17.944 NVMe Specification Version (VS): 1.4 00:10:17.944 NVMe Specification Version (Identify): 1.4 00:10:17.944 Maximum Queue Entries: 2048 00:10:17.944 Contiguous Queues Required: Yes 00:10:17.944 Arbitration Mechanisms Supported 00:10:17.944 Weighted Round Robin: Not Supported 00:10:17.944 Vendor Specific: Not Supported 00:10:17.944 Reset Timeout: 7500 ms 00:10:17.944 Doorbell Stride: 4 bytes 00:10:17.944 NVM Subsystem Reset: Not Supported 00:10:17.944 Command Sets Supported 00:10:17.944 NVM Command Set: Supported 00:10:17.944 Boot Partition: Not Supported 00:10:17.944 Memory Page Size Minimum: 4096 bytes 00:10:17.944 Memory Page Size Maximum: 65536 bytes 00:10:17.944 Persistent Memory Region: Not Supported 00:10:17.944 Optional Asynchronous Events Supported 00:10:17.944 Namespace Attribute Notices: Supported 00:10:17.944 Firmware 
Activation Notices: Not Supported 00:10:17.944 ANA Change Notices: Not Supported 00:10:17.944 PLE Aggregate Log Change Notices: Not Supported 00:10:17.944 LBA Status Info Alert Notices: Not Supported 00:10:17.944 EGE Aggregate Log Change Notices: Not Supported 00:10:17.944 Normal NVM Subsystem Shutdown event: Not Supported 00:10:17.944 Zone Descriptor Change Notices: Not Supported 00:10:17.944 Discovery Log Change Notices: Not Supported 00:10:17.944 Controller Attributes 00:10:17.944 128-bit Host Identifier: Not Supported 00:10:17.944 Non-Operational Permissive Mode: Not Supported 00:10:17.944 NVM Sets: Not Supported 00:10:17.944 Read Recovery Levels: Not Supported 00:10:17.944 Endurance Groups: Not Supported 00:10:17.944 Predictable Latency Mode: Not Supported 00:10:17.944 Traffic Based Keep ALive: Not Supported 00:10:17.944 Namespace Granularity: Not Supported 00:10:17.944 SQ Associations: Not Supported 00:10:17.944 UUID List: Not Supported 00:10:17.944 Multi-Domain Subsystem: Not Supported 00:10:17.944 Fixed Capacity Management: Not Supported 00:10:17.944 Variable Capacity Management: Not Supported 00:10:17.944 Delete Endurance Group: Not Supported 00:10:17.944 Delete NVM Set: Not Supported 00:10:17.944 Extended LBA Formats Supported: Supported 00:10:17.944 Flexible Data Placement Supported: Not Supported 00:10:17.944 00:10:17.944 Controller Memory Buffer Support 00:10:17.944 ================================ 00:10:17.944 Supported: No 00:10:17.944 00:10:17.944 Persistent Memory Region Support 00:10:17.944 ================================ 00:10:17.944 Supported: No 00:10:17.944 00:10:17.944 Admin Command Set Attributes 00:10:17.944 ============================ 00:10:17.944 Security Send/Receive: Not Supported 00:10:17.944 Format NVM: Supported 00:10:17.944 Firmware Activate/Download: Not Supported 00:10:17.944 Namespace Management: Supported 00:10:17.944 Device Self-Test: Not Supported 00:10:17.944 Directives: Supported 00:10:17.944 NVMe-MI: Not Supported 00:10:17.944 Virtualization Management: Not Supported 00:10:17.944 Doorbell Buffer Config: Supported 00:10:17.944 Get LBA Status Capability: Not Supported 00:10:17.944 Command & Feature Lockdown Capability: Not Supported 00:10:17.944 Abort Command Limit: 4 00:10:17.944 Async Event Request Limit: 4 00:10:17.944 Number of Firmware Slots: N/A 00:10:17.944 Firmware Slot 1 Read-Only: N/A 00:10:17.944 Firmware Activation Without Reset: N/A 00:10:17.944 Multiple Update Detection Support: N/A 00:10:17.944 Firmware Update Granularity: No Information Provided 00:10:17.944 Per-Namespace SMART Log: Yes 00:10:17.944 Asymmetric Namespace Access Log Page: Not Supported 00:10:17.944 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:17.944 Command Effects Log Page: Supported 00:10:17.944 Get Log Page Extended Data: Supported 00:10:17.944 Telemetry Log Pages: Not Supported 00:10:17.944 Persistent Event Log Pages: Not Supported 00:10:17.944 Supported Log Pages Log Page: May Support 00:10:17.944 Commands Supported & Effects Log Page: Not Supported 00:10:17.944 Feature Identifiers & Effects Log Page:May Support 00:10:17.944 NVMe-MI Commands & Effects Log Page: May Support 00:10:17.944 Data Area 4 for Telemetry Log: Not Supported 00:10:17.944 Error Log Page Entries Supported: 1 00:10:17.944 Keep Alive: Not Supported 00:10:17.944 00:10:17.944 NVM Command Set Attributes 00:10:17.944 ========================== 00:10:17.944 Submission Queue Entry Size 00:10:17.944 Max: 64 00:10:17.944 Min: 64 00:10:17.944 Completion Queue Entry Size 00:10:17.944 Max: 16 
00:10:17.944 Min: 16 00:10:17.944 Number of Namespaces: 256 00:10:17.944 Compare Command: Supported 00:10:17.944 Write Uncorrectable Command: Not Supported 00:10:17.944 Dataset Management Command: Supported 00:10:17.945 Write Zeroes Command: Supported 00:10:17.945 Set Features Save Field: Supported 00:10:17.945 Reservations: Not Supported 00:10:17.945 Timestamp: Supported 00:10:17.945 Copy: Supported 00:10:17.945 Volatile Write Cache: Present 00:10:17.945 Atomic Write Unit (Normal): 1 00:10:17.945 Atomic Write Unit (PFail): 1 00:10:17.945 Atomic Compare & Write Unit: 1 00:10:17.945 Fused Compare & Write: Not Supported 00:10:17.945 Scatter-Gather List 00:10:17.945 SGL Command Set: Supported 00:10:17.945 SGL Keyed: Not Supported 00:10:17.945 SGL Bit Bucket Descriptor: Not Supported 00:10:17.945 SGL Metadata Pointer: Not Supported 00:10:17.945 Oversized SGL: Not Supported 00:10:17.945 SGL Metadata Address: Not Supported 00:10:17.945 SGL Offset: Not Supported 00:10:17.945 Transport SGL Data Block: Not Supported 00:10:17.945 Replay Protected Memory Block: Not Supported 00:10:17.945 00:10:17.945 Firmware Slot Information 00:10:17.945 ========================= 00:10:17.945 Active slot: 1 00:10:17.945 Slot 1 Firmware Revision: 1.0 00:10:17.945 00:10:17.945 00:10:17.945 Commands Supported and Effects 00:10:17.945 ============================== 00:10:17.945 Admin Commands 00:10:17.945 -------------- 00:10:17.945 Delete I/O Submission Queue (00h): Supported 00:10:17.945 Create I/O Submission Queue (01h): Supported 00:10:17.945 Get Log Page (02h): Supported 00:10:17.945 Delete I/O Completion Queue (04h): Supported 00:10:17.945 Create I/O Completion Queue (05h): Supported 00:10:17.945 Identify (06h): Supported 00:10:17.945 Abort (08h): Supported 00:10:17.945 Set Features (09h): Supported 00:10:17.945 Get Features (0Ah): Supported 00:10:17.945 Asynchronous Event Request (0Ch): Supported 00:10:17.945 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:17.945 Directive Send (19h): Supported 00:10:17.945 Directive Receive (1Ah): Supported 00:10:17.945 Virtualization Management (1Ch): Supported 00:10:17.945 Doorbell Buffer Config (7Ch): Supported 00:10:17.945 Format NVM (80h): Supported LBA-Change 00:10:17.945 I/O Commands 00:10:17.945 ------------ 00:10:17.945 Flush (00h): Supported LBA-Change 00:10:17.945 Write (01h): Supported LBA-Change 00:10:17.945 Read (02h): Supported 00:10:17.945 Compare (05h): Supported 00:10:17.945 Write Zeroes (08h): Supported LBA-Change 00:10:17.945 Dataset Management (09h): Supported LBA-Change 00:10:17.945 Unknown (0Ch): Supported 00:10:17.945 Unknown (12h): Supported 00:10:17.945 Copy (19h): Supported LBA-Change 00:10:17.945 Unknown (1Dh): Supported LBA-Change 00:10:17.945 00:10:17.945 Error Log 00:10:17.945 ========= 00:10:17.945 00:10:17.945 Arbitration 00:10:17.945 =========== 00:10:17.945 Arbitration Burst: no limit 00:10:17.945 00:10:17.945 Power Management 00:10:17.945 ================ 00:10:17.945 Number of Power States: 1 00:10:17.945 Current Power State: Power State #0 00:10:17.945 Power State #0: 00:10:17.945 Max Power: 25.00 W 00:10:17.945 Non-Operational State: Operational 00:10:17.945 Entry Latency: 16 microseconds 00:10:17.945 Exit Latency: 4 microseconds 00:10:17.945 Relative Read Throughput: 0 00:10:17.945 Relative Read Latency: 0 00:10:17.945 Relative Write Throughput: 0 00:10:17.945 Relative Write Latency: 0 00:10:17.945 Idle Power: Not Reported 00:10:17.945 Active Power: Not Reported 00:10:17.945 Non-Operational Permissive Mode: Not Supported 
00:10:17.945 00:10:17.945 Health Information 00:10:17.945 ================== 00:10:17.945 Critical Warnings: 00:10:17.945 Available Spare Space: OK 00:10:17.945 Temperature: OK 00:10:17.945 Device Reliability: OK 00:10:17.945 Read Only: No 00:10:17.945 Volatile Memory Backup: OK 00:10:17.945 Current Temperature: 323 Kelvin (50 Celsius) 00:10:17.945 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:17.945 Available Spare: 0% 00:10:17.945 Available Spare Threshold: 0% 00:10:17.945 Life Percentage Used: 0% 00:10:17.945 Data Units Read: 2459 00:10:17.945 Data Units Written: 2247 00:10:17.945 Host Read Commands: 97094 00:10:17.945 Host Write Commands: 95364 00:10:17.945 Controller Busy Time: 0 minutes 00:10:17.945 Power Cycles: 0 00:10:17.945 Power On Hours: 0 hours 00:10:17.945 Unsafe Shutdowns: 0 00:10:17.945 Unrecoverable Media Errors: 0 00:10:17.945 Lifetime Error Log Entries: 0 00:10:17.945 Warning Temperature Time: 0 minutes 00:10:17.945 Critical Temperature Time: 0 minutes 00:10:17.945 00:10:17.945 Number of Queues 00:10:17.945 ================ 00:10:17.945 Number of I/O Submission Queues: 64 00:10:17.945 Number of I/O Completion Queues: 64 00:10:17.945 00:10:17.945 ZNS Specific Controller Data 00:10:17.945 ============================ 00:10:17.945 Zone Append Size Limit: 0 00:10:17.945 00:10:17.945 00:10:17.945 Active Namespaces 00:10:17.945 ================= 00:10:17.945 Namespace ID:1 00:10:17.945 Error Recovery Timeout: Unlimited 00:10:17.945 Command Set Identifier: NVM (00h) 00:10:17.945 Deallocate: Supported 00:10:17.945 Deallocated/Unwritten Error: Supported 00:10:17.945 Deallocated Read Value: All 0x00 00:10:17.945 Deallocate in Write Zeroes: Not Supported 00:10:17.945 Deallocated Guard Field: 0xFFFF 00:10:17.945 Flush: Supported 00:10:17.945 Reservation: Not Supported 00:10:17.945 Namespace Sharing Capabilities: Private 00:10:17.945 Size (in LBAs): 1048576 (4GiB) 00:10:17.945 Capacity (in LBAs): 1048576 (4GiB) 00:10:17.945 Utilization (in LBAs): 1048576 (4GiB) 00:10:17.945 Thin Provisioning: Not Supported 00:10:17.945 Per-NS Atomic Units: No 00:10:17.945 Maximum Single Source Range Length: 128 00:10:17.945 Maximum Copy Length: 128 00:10:17.945 Maximum Source Range Count: 128 00:10:17.945 NGUID/EUI64 Never Reused: No 00:10:17.945 Namespace Write Protected: No 00:10:17.945 Number of LBA Formats: 8 00:10:17.945 Current LBA Format: LBA Format #04 00:10:17.945 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:17.945 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:17.945 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:17.945 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:17.945 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:17.945 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:17.945 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:17.945 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:17.945 00:10:17.945 NVM Specific Namespace Data 00:10:17.945 =========================== 00:10:17.945 Logical Block Storage Tag Mask: 0 00:10:17.945 Protection Information Capabilities: 00:10:17.945 16b Guard Protection Information Storage Tag Support: No 00:10:17.945 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:17.945 Storage Tag Check Read Support: No 00:10:17.945 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.945 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.945 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.945 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.945 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.945 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.945 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.945 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.945 Namespace ID:2 00:10:17.945 Error Recovery Timeout: Unlimited 00:10:17.945 Command Set Identifier: NVM (00h) 00:10:17.945 Deallocate: Supported 00:10:17.945 Deallocated/Unwritten Error: Supported 00:10:17.945 Deallocated Read Value: All 0x00 00:10:17.945 Deallocate in Write Zeroes: Not Supported 00:10:17.945 Deallocated Guard Field: 0xFFFF 00:10:17.945 Flush: Supported 00:10:17.945 Reservation: Not Supported 00:10:17.945 Namespace Sharing Capabilities: Private 00:10:17.945 Size (in LBAs): 1048576 (4GiB) 00:10:17.945 Capacity (in LBAs): 1048576 (4GiB) 00:10:17.945 Utilization (in LBAs): 1048576 (4GiB) 00:10:17.945 Thin Provisioning: Not Supported 00:10:17.946 Per-NS Atomic Units: No 00:10:17.946 Maximum Single Source Range Length: 128 00:10:17.946 Maximum Copy Length: 128 00:10:17.946 Maximum Source Range Count: 128 00:10:17.946 NGUID/EUI64 Never Reused: No 00:10:17.946 Namespace Write Protected: No 00:10:17.946 Number of LBA Formats: 8 00:10:17.946 Current LBA Format: LBA Format #04 00:10:17.946 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:17.946 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:17.946 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:17.946 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:17.946 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:17.946 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:17.946 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:17.946 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:17.946 00:10:17.946 NVM Specific Namespace Data 00:10:17.946 =========================== 00:10:17.946 Logical Block Storage Tag Mask: 0 00:10:17.946 Protection Information Capabilities: 00:10:17.946 16b Guard Protection Information Storage Tag Support: No 00:10:17.946 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:17.946 Storage Tag Check Read Support: No 00:10:17.946 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.946 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.946 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.946 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.946 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.946 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.946 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.946 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:17.946 Namespace ID:3 00:10:17.946 Error Recovery Timeout: Unlimited 00:10:17.946 Command Set Identifier: NVM (00h) 00:10:17.946 Deallocate: Supported 00:10:17.946 Deallocated/Unwritten Error: Supported 00:10:17.946 Deallocated Read 
Value: All 0x00 00:10:17.946 Deallocate in Write Zeroes: Not Supported 00:10:17.946 Deallocated Guard Field: 0xFFFF 00:10:17.946 Flush: Supported 00:10:17.946 Reservation: Not Supported 00:10:17.946 Namespace Sharing Capabilities: Private 00:10:17.946 Size (in LBAs): 1048576 (4GiB) 00:10:17.946 Capacity (in LBAs): 1048576 (4GiB) 00:10:17.946 Utilization (in LBAs): 1048576 (4GiB) 00:10:17.946 Thin Provisioning: Not Supported 00:10:17.946 Per-NS Atomic Units: No 00:10:17.946 Maximum Single Source Range Length: 128 00:10:17.946 Maximum Copy Length: 128 00:10:17.946 Maximum Source Range Count: 128 00:10:17.946 NGUID/EUI64 Never Reused: No 00:10:17.946 Namespace Write Protected: No 00:10:17.946 Number of LBA Formats: 8 00:10:17.946 Current LBA Format: LBA Format #04 00:10:17.946 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:17.946 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:17.946 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:17.946 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:17.946 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:17.946 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:17.946 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:17.946 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:17.946 00:10:17.946 NVM Specific Namespace Data 00:10:17.946 =========================== 00:10:17.946 Logical Block Storage Tag Mask: 0 00:10:17.946 Protection Information Capabilities: 00:10:17.946 16b Guard Protection Information Storage Tag Support: No 00:10:17.946 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:18.205 Storage Tag Check Read Support: No 00:10:18.205 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:18.205 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:18.205 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:18.205 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:18.205 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:18.205 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:18.205 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:18.205 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:18.205 16:04:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:18.205 16:04:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:18.464 ===================================================== 00:10:18.464 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:18.464 ===================================================== 00:10:18.464 Controller Capabilities/Features 00:10:18.464 ================================ 00:10:18.464 Vendor ID: 1b36 00:10:18.464 Subsystem Vendor ID: 1af4 00:10:18.464 Serial Number: 12343 00:10:18.464 Model Number: QEMU NVMe Ctrl 00:10:18.464 Firmware Version: 8.0.0 00:10:18.464 Recommended Arb Burst: 6 00:10:18.464 IEEE OUI Identifier: 00 54 52 00:10:18.464 Multi-path I/O 00:10:18.464 May have multiple subsystem ports: No 00:10:18.464 May have multiple controllers: Yes 00:10:18.464 Associated with SR-IOV VF: No 00:10:18.464 Max Data Transfer Size: 524288 00:10:18.464 Max Number of Namespaces: 
256 00:10:18.464 Max Number of I/O Queues: 64 00:10:18.464 NVMe Specification Version (VS): 1.4 00:10:18.464 NVMe Specification Version (Identify): 1.4 00:10:18.464 Maximum Queue Entries: 2048 00:10:18.464 Contiguous Queues Required: Yes 00:10:18.464 Arbitration Mechanisms Supported 00:10:18.464 Weighted Round Robin: Not Supported 00:10:18.464 Vendor Specific: Not Supported 00:10:18.464 Reset Timeout: 7500 ms 00:10:18.464 Doorbell Stride: 4 bytes 00:10:18.464 NVM Subsystem Reset: Not Supported 00:10:18.464 Command Sets Supported 00:10:18.464 NVM Command Set: Supported 00:10:18.464 Boot Partition: Not Supported 00:10:18.464 Memory Page Size Minimum: 4096 bytes 00:10:18.464 Memory Page Size Maximum: 65536 bytes 00:10:18.464 Persistent Memory Region: Not Supported 00:10:18.464 Optional Asynchronous Events Supported 00:10:18.464 Namespace Attribute Notices: Supported 00:10:18.464 Firmware Activation Notices: Not Supported 00:10:18.464 ANA Change Notices: Not Supported 00:10:18.464 PLE Aggregate Log Change Notices: Not Supported 00:10:18.464 LBA Status Info Alert Notices: Not Supported 00:10:18.464 EGE Aggregate Log Change Notices: Not Supported 00:10:18.464 Normal NVM Subsystem Shutdown event: Not Supported 00:10:18.464 Zone Descriptor Change Notices: Not Supported 00:10:18.464 Discovery Log Change Notices: Not Supported 00:10:18.464 Controller Attributes 00:10:18.464 128-bit Host Identifier: Not Supported 00:10:18.464 Non-Operational Permissive Mode: Not Supported 00:10:18.464 NVM Sets: Not Supported 00:10:18.464 Read Recovery Levels: Not Supported 00:10:18.464 Endurance Groups: Supported 00:10:18.465 Predictable Latency Mode: Not Supported 00:10:18.465 Traffic Based Keep ALive: Not Supported 00:10:18.465 Namespace Granularity: Not Supported 00:10:18.465 SQ Associations: Not Supported 00:10:18.465 UUID List: Not Supported 00:10:18.465 Multi-Domain Subsystem: Not Supported 00:10:18.465 Fixed Capacity Management: Not Supported 00:10:18.465 Variable Capacity Management: Not Supported 00:10:18.465 Delete Endurance Group: Not Supported 00:10:18.465 Delete NVM Set: Not Supported 00:10:18.465 Extended LBA Formats Supported: Supported 00:10:18.465 Flexible Data Placement Supported: Supported 00:10:18.465 00:10:18.465 Controller Memory Buffer Support 00:10:18.465 ================================ 00:10:18.465 Supported: No 00:10:18.465 00:10:18.465 Persistent Memory Region Support 00:10:18.465 ================================ 00:10:18.465 Supported: No 00:10:18.465 00:10:18.465 Admin Command Set Attributes 00:10:18.465 ============================ 00:10:18.465 Security Send/Receive: Not Supported 00:10:18.465 Format NVM: Supported 00:10:18.465 Firmware Activate/Download: Not Supported 00:10:18.465 Namespace Management: Supported 00:10:18.465 Device Self-Test: Not Supported 00:10:18.465 Directives: Supported 00:10:18.465 NVMe-MI: Not Supported 00:10:18.465 Virtualization Management: Not Supported 00:10:18.465 Doorbell Buffer Config: Supported 00:10:18.465 Get LBA Status Capability: Not Supported 00:10:18.465 Command & Feature Lockdown Capability: Not Supported 00:10:18.465 Abort Command Limit: 4 00:10:18.465 Async Event Request Limit: 4 00:10:18.465 Number of Firmware Slots: N/A 00:10:18.465 Firmware Slot 1 Read-Only: N/A 00:10:18.465 Firmware Activation Without Reset: N/A 00:10:18.465 Multiple Update Detection Support: N/A 00:10:18.465 Firmware Update Granularity: No Information Provided 00:10:18.465 Per-Namespace SMART Log: Yes 00:10:18.465 Asymmetric Namespace Access Log Page: Not Supported 
00:10:18.465 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:18.465 Command Effects Log Page: Supported 00:10:18.465 Get Log Page Extended Data: Supported 00:10:18.465 Telemetry Log Pages: Not Supported 00:10:18.465 Persistent Event Log Pages: Not Supported 00:10:18.465 Supported Log Pages Log Page: May Support 00:10:18.465 Commands Supported & Effects Log Page: Not Supported 00:10:18.465 Feature Identifiers & Effects Log Page:May Support 00:10:18.465 NVMe-MI Commands & Effects Log Page: May Support 00:10:18.465 Data Area 4 for Telemetry Log: Not Supported 00:10:18.465 Error Log Page Entries Supported: 1 00:10:18.465 Keep Alive: Not Supported 00:10:18.465 00:10:18.465 NVM Command Set Attributes 00:10:18.465 ========================== 00:10:18.465 Submission Queue Entry Size 00:10:18.465 Max: 64 00:10:18.465 Min: 64 00:10:18.465 Completion Queue Entry Size 00:10:18.465 Max: 16 00:10:18.465 Min: 16 00:10:18.465 Number of Namespaces: 256 00:10:18.465 Compare Command: Supported 00:10:18.465 Write Uncorrectable Command: Not Supported 00:10:18.465 Dataset Management Command: Supported 00:10:18.465 Write Zeroes Command: Supported 00:10:18.465 Set Features Save Field: Supported 00:10:18.465 Reservations: Not Supported 00:10:18.465 Timestamp: Supported 00:10:18.465 Copy: Supported 00:10:18.465 Volatile Write Cache: Present 00:10:18.465 Atomic Write Unit (Normal): 1 00:10:18.465 Atomic Write Unit (PFail): 1 00:10:18.465 Atomic Compare & Write Unit: 1 00:10:18.465 Fused Compare & Write: Not Supported 00:10:18.465 Scatter-Gather List 00:10:18.465 SGL Command Set: Supported 00:10:18.465 SGL Keyed: Not Supported 00:10:18.465 SGL Bit Bucket Descriptor: Not Supported 00:10:18.465 SGL Metadata Pointer: Not Supported 00:10:18.465 Oversized SGL: Not Supported 00:10:18.465 SGL Metadata Address: Not Supported 00:10:18.465 SGL Offset: Not Supported 00:10:18.465 Transport SGL Data Block: Not Supported 00:10:18.465 Replay Protected Memory Block: Not Supported 00:10:18.465 00:10:18.465 Firmware Slot Information 00:10:18.465 ========================= 00:10:18.465 Active slot: 1 00:10:18.465 Slot 1 Firmware Revision: 1.0 00:10:18.465 00:10:18.465 00:10:18.465 Commands Supported and Effects 00:10:18.465 ============================== 00:10:18.465 Admin Commands 00:10:18.465 -------------- 00:10:18.465 Delete I/O Submission Queue (00h): Supported 00:10:18.465 Create I/O Submission Queue (01h): Supported 00:10:18.465 Get Log Page (02h): Supported 00:10:18.465 Delete I/O Completion Queue (04h): Supported 00:10:18.465 Create I/O Completion Queue (05h): Supported 00:10:18.465 Identify (06h): Supported 00:10:18.465 Abort (08h): Supported 00:10:18.465 Set Features (09h): Supported 00:10:18.465 Get Features (0Ah): Supported 00:10:18.465 Asynchronous Event Request (0Ch): Supported 00:10:18.465 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:18.465 Directive Send (19h): Supported 00:10:18.465 Directive Receive (1Ah): Supported 00:10:18.465 Virtualization Management (1Ch): Supported 00:10:18.465 Doorbell Buffer Config (7Ch): Supported 00:10:18.465 Format NVM (80h): Supported LBA-Change 00:10:18.465 I/O Commands 00:10:18.465 ------------ 00:10:18.465 Flush (00h): Supported LBA-Change 00:10:18.465 Write (01h): Supported LBA-Change 00:10:18.465 Read (02h): Supported 00:10:18.465 Compare (05h): Supported 00:10:18.465 Write Zeroes (08h): Supported LBA-Change 00:10:18.465 Dataset Management (09h): Supported LBA-Change 00:10:18.465 Unknown (0Ch): Supported 00:10:18.465 Unknown (12h): Supported 00:10:18.465 Copy 
(19h): Supported LBA-Change 00:10:18.465 Unknown (1Dh): Supported LBA-Change 00:10:18.465 00:10:18.465 Error Log 00:10:18.465 ========= 00:10:18.465 00:10:18.465 Arbitration 00:10:18.465 =========== 00:10:18.465 Arbitration Burst: no limit 00:10:18.465 00:10:18.465 Power Management 00:10:18.465 ================ 00:10:18.466 Number of Power States: 1 00:10:18.466 Current Power State: Power State #0 00:10:18.466 Power State #0: 00:10:18.466 Max Power: 25.00 W 00:10:18.466 Non-Operational State: Operational 00:10:18.466 Entry Latency: 16 microseconds 00:10:18.466 Exit Latency: 4 microseconds 00:10:18.466 Relative Read Throughput: 0 00:10:18.466 Relative Read Latency: 0 00:10:18.466 Relative Write Throughput: 0 00:10:18.466 Relative Write Latency: 0 00:10:18.466 Idle Power: Not Reported 00:10:18.466 Active Power: Not Reported 00:10:18.466 Non-Operational Permissive Mode: Not Supported 00:10:18.466 00:10:18.466 Health Information 00:10:18.466 ================== 00:10:18.466 Critical Warnings: 00:10:18.466 Available Spare Space: OK 00:10:18.466 Temperature: OK 00:10:18.466 Device Reliability: OK 00:10:18.466 Read Only: No 00:10:18.466 Volatile Memory Backup: OK 00:10:18.466 Current Temperature: 323 Kelvin (50 Celsius) 00:10:18.466 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:18.466 Available Spare: 0% 00:10:18.466 Available Spare Threshold: 0% 00:10:18.466 Life Percentage Used: 0% 00:10:18.466 Data Units Read: 1058 00:10:18.466 Data Units Written: 987 00:10:18.466 Host Read Commands: 34261 00:10:18.466 Host Write Commands: 33684 00:10:18.466 Controller Busy Time: 0 minutes 00:10:18.466 Power Cycles: 0 00:10:18.466 Power On Hours: 0 hours 00:10:18.466 Unsafe Shutdowns: 0 00:10:18.466 Unrecoverable Media Errors: 0 00:10:18.466 Lifetime Error Log Entries: 0 00:10:18.466 Warning Temperature Time: 0 minutes 00:10:18.466 Critical Temperature Time: 0 minutes 00:10:18.466 00:10:18.466 Number of Queues 00:10:18.466 ================ 00:10:18.466 Number of I/O Submission Queues: 64 00:10:18.466 Number of I/O Completion Queues: 64 00:10:18.466 00:10:18.466 ZNS Specific Controller Data 00:10:18.466 ============================ 00:10:18.466 Zone Append Size Limit: 0 00:10:18.466 00:10:18.466 00:10:18.466 Active Namespaces 00:10:18.466 ================= 00:10:18.466 Namespace ID:1 00:10:18.466 Error Recovery Timeout: Unlimited 00:10:18.466 Command Set Identifier: NVM (00h) 00:10:18.466 Deallocate: Supported 00:10:18.466 Deallocated/Unwritten Error: Supported 00:10:18.466 Deallocated Read Value: All 0x00 00:10:18.466 Deallocate in Write Zeroes: Not Supported 00:10:18.466 Deallocated Guard Field: 0xFFFF 00:10:18.466 Flush: Supported 00:10:18.466 Reservation: Not Supported 00:10:18.466 Namespace Sharing Capabilities: Multiple Controllers 00:10:18.466 Size (in LBAs): 262144 (1GiB) 00:10:18.466 Capacity (in LBAs): 262144 (1GiB) 00:10:18.466 Utilization (in LBAs): 262144 (1GiB) 00:10:18.466 Thin Provisioning: Not Supported 00:10:18.466 Per-NS Atomic Units: No 00:10:18.466 Maximum Single Source Range Length: 128 00:10:18.466 Maximum Copy Length: 128 00:10:18.466 Maximum Source Range Count: 128 00:10:18.466 NGUID/EUI64 Never Reused: No 00:10:18.466 Namespace Write Protected: No 00:10:18.466 Endurance group ID: 1 00:10:18.466 Number of LBA Formats: 8 00:10:18.466 Current LBA Format: LBA Format #04 00:10:18.466 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:18.466 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:18.466 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:18.466 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:10:18.466 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:18.466 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:18.466 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:18.466 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:18.466 00:10:18.466 Get Feature FDP: 00:10:18.466 ================ 00:10:18.466 Enabled: Yes 00:10:18.466 FDP configuration index: 0 00:10:18.466 00:10:18.466 FDP configurations log page 00:10:18.466 =========================== 00:10:18.466 Number of FDP configurations: 1 00:10:18.466 Version: 0 00:10:18.466 Size: 112 00:10:18.466 FDP Configuration Descriptor: 0 00:10:18.466 Descriptor Size: 96 00:10:18.466 Reclaim Group Identifier format: 2 00:10:18.466 FDP Volatile Write Cache: Not Present 00:10:18.466 FDP Configuration: Valid 00:10:18.466 Vendor Specific Size: 0 00:10:18.466 Number of Reclaim Groups: 2 00:10:18.466 Number of Reclaim Unit Handles: 8 00:10:18.466 Max Placement Identifiers: 128 00:10:18.466 Number of Namespaces Supported: 256 00:10:18.466 Reclaim unit Nominal Size: 6000000 bytes 00:10:18.466 Estimated Reclaim Unit Time Limit: Not Reported 00:10:18.466 RUH Desc #000: RUH Type: Initially Isolated 00:10:18.466 RUH Desc #001: RUH Type: Initially Isolated 00:10:18.466 RUH Desc #002: RUH Type: Initially Isolated 00:10:18.466 RUH Desc #003: RUH Type: Initially Isolated 00:10:18.466 RUH Desc #004: RUH Type: Initially Isolated 00:10:18.466 RUH Desc #005: RUH Type: Initially Isolated 00:10:18.466 RUH Desc #006: RUH Type: Initially Isolated 00:10:18.466 RUH Desc #007: RUH Type: Initially Isolated 00:10:18.466 00:10:18.466 FDP reclaim unit handle usage log page 00:10:18.466 ====================================== 00:10:18.466 Number of Reclaim Unit Handles: 8 00:10:18.466 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:18.466 RUH Usage Desc #001: RUH Attributes: Unused 00:10:18.466 RUH Usage Desc #002: RUH Attributes: Unused 00:10:18.466 RUH Usage Desc #003: RUH Attributes: Unused 00:10:18.466 RUH Usage Desc #004: RUH Attributes: Unused 00:10:18.466 RUH Usage Desc #005: RUH Attributes: Unused 00:10:18.466 RUH Usage Desc #006: RUH Attributes: Unused 00:10:18.466 RUH Usage Desc #007: RUH Attributes: Unused 00:10:18.466 00:10:18.466 FDP statistics log page 00:10:18.466 ======================= 00:10:18.466 Host bytes with metadata written: 584572928 00:10:18.466 Media bytes with metadata written: 584650752 00:10:18.466 Media bytes erased: 0 00:10:18.466 00:10:18.466 FDP events log page 00:10:18.466 =================== 00:10:18.466 Number of FDP events: 0 00:10:18.466 00:10:18.466 NVM Specific Namespace Data 00:10:18.466 =========================== 00:10:18.466 Logical Block Storage Tag Mask: 0 00:10:18.466 Protection Information Capabilities: 00:10:18.466 16b Guard Protection Information Storage Tag Support: No 00:10:18.466 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:18.466 Storage Tag Check Read Support: No 00:10:18.466 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:18.466 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:18.466 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:18.466 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:18.466 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:18.466 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:18.466 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:18.466 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:18.466 00:10:18.466 real 0m1.744s 00:10:18.466 user 0m0.670s 00:10:18.466 sys 0m0.865s 00:10:18.466 16:04:37 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:18.466 16:04:37 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:10:18.466 ************************************ 00:10:18.466 END TEST nvme_identify 00:10:18.466 ************************************ 00:10:18.466 16:04:37 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:18.467 16:04:37 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:18.467 16:04:37 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:18.467 16:04:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:18.467 ************************************ 00:10:18.467 START TEST nvme_perf 00:10:18.467 ************************************ 00:10:18.467 16:04:37 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:10:18.467 16:04:37 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:19.843 Initializing NVMe Controllers 00:10:19.843 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:19.843 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:19.843 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:19.843 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:19.843 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:19.843 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:19.843 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:19.843 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:19.843 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:19.843 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:19.843 Initialization complete. Launching workers. 
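The identify dumps above are easier to sanity-check offline than by eye. A minimal sketch in shell, assuming this console output has been saved to a local file (identify.log is a placeholder name, not a file the job produces):

# Tally the in-use LBA format reported by each namespace in a saved
# spdk_nvme_identify dump; "identify.log" is a placeholder path.
grep -o 'Current LBA Format: LBA Format #[0-9]*' identify.log | sort | uniq -c
# Every namespace in the dumps above reports LBA Format #04 (4096-byte data,
# no metadata), so a single tally line is the expected result.

Using grep -o keeps only the matched field, so the tally is unaffected by the timestamp prefixes and controller banners surrounding it.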
00:10:19.843 ======================================================== 00:10:19.843 Latency(us) 00:10:19.843 Device Information : IOPS MiB/s Average min max 00:10:19.843 PCIE (0000:00:10.0) NSID 1 from core 0: 13687.76 160.40 9378.66 7973.06 47075.61 00:10:19.843 PCIE (0000:00:11.0) NSID 1 from core 0: 13687.76 160.40 9364.91 8072.41 45072.75 00:10:19.843 PCIE (0000:00:13.0) NSID 1 from core 0: 13687.76 160.40 9350.51 8061.80 43623.56 00:10:19.843 PCIE (0000:00:12.0) NSID 1 from core 0: 13687.76 160.40 9335.88 8095.18 41631.29 00:10:19.843 PCIE (0000:00:12.0) NSID 2 from core 0: 13687.76 160.40 9321.07 8048.47 39838.97 00:10:19.843 PCIE (0000:00:12.0) NSID 3 from core 0: 13751.72 161.15 9263.12 8004.13 32724.81 00:10:19.843 ======================================================== 00:10:19.843 Total : 82190.52 963.17 9335.64 7973.06 47075.61 00:10:19.843 00:10:19.843 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:19.843 ================================================================================= 00:10:19.843 1.00000% : 8211.740us 00:10:19.843 10.00000% : 8527.576us 00:10:19.843 25.00000% : 8738.133us 00:10:19.844 50.00000% : 9001.330us 00:10:19.844 75.00000% : 9369.806us 00:10:19.844 90.00000% : 9685.642us 00:10:19.844 95.00000% : 10212.035us 00:10:19.844 98.00000% : 11317.462us 00:10:19.844 99.00000% : 13317.757us 00:10:19.844 99.50000% : 39374.239us 00:10:19.844 99.90000% : 46743.749us 00:10:19.844 99.99000% : 47164.864us 00:10:19.844 99.99900% : 47164.864us 00:10:19.844 99.99990% : 47164.864us 00:10:19.844 99.99999% : 47164.864us 00:10:19.844 00:10:19.844 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:19.844 ================================================================================= 00:10:19.844 1.00000% : 8317.018us 00:10:19.844 10.00000% : 8580.215us 00:10:19.844 25.00000% : 8738.133us 00:10:19.844 50.00000% : 9001.330us 00:10:19.844 75.00000% : 9317.166us 00:10:19.844 90.00000% : 9685.642us 00:10:19.844 95.00000% : 10212.035us 00:10:19.844 98.00000% : 11264.822us 00:10:19.844 99.00000% : 13580.954us 00:10:19.844 99.50000% : 37689.780us 00:10:19.844 99.90000% : 44848.733us 00:10:19.844 99.99000% : 45059.290us 00:10:19.844 99.99900% : 45269.847us 00:10:19.844 99.99990% : 45269.847us 00:10:19.844 99.99999% : 45269.847us 00:10:19.844 00:10:19.844 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:19.844 ================================================================================= 00:10:19.844 1.00000% : 8317.018us 00:10:19.844 10.00000% : 8527.576us 00:10:19.844 25.00000% : 8738.133us 00:10:19.844 50.00000% : 9001.330us 00:10:19.844 75.00000% : 9317.166us 00:10:19.844 90.00000% : 9685.642us 00:10:19.844 95.00000% : 10264.675us 00:10:19.844 98.00000% : 11106.904us 00:10:19.844 99.00000% : 13423.036us 00:10:19.844 99.50000% : 36847.550us 00:10:19.844 99.90000% : 43374.831us 00:10:19.844 99.99000% : 43795.945us 00:10:19.844 99.99900% : 43795.945us 00:10:19.844 99.99990% : 43795.945us 00:10:19.844 99.99999% : 43795.945us 00:10:19.844 00:10:19.844 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:19.844 ================================================================================= 00:10:19.844 1.00000% : 8317.018us 00:10:19.844 10.00000% : 8527.576us 00:10:19.844 25.00000% : 8738.133us 00:10:19.844 50.00000% : 9001.330us 00:10:19.844 75.00000% : 9317.166us 00:10:19.844 90.00000% : 9685.642us 00:10:19.844 95.00000% : 10264.675us 00:10:19.844 98.00000% : 11212.183us 00:10:19.844 99.00000% : 
13686.233us 00:10:19.844 99.50000% : 34952.533us 00:10:19.844 99.90000% : 41479.814us 00:10:19.844 99.99000% : 41690.371us 00:10:19.844 99.99900% : 41690.371us 00:10:19.844 99.99990% : 41690.371us 00:10:19.844 99.99999% : 41690.371us 00:10:19.844 00:10:19.844 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:19.844 ================================================================================= 00:10:19.844 1.00000% : 8317.018us 00:10:19.844 10.00000% : 8527.576us 00:10:19.844 25.00000% : 8738.133us 00:10:19.844 50.00000% : 9001.330us 00:10:19.844 75.00000% : 9317.166us 00:10:19.844 90.00000% : 9685.642us 00:10:19.844 95.00000% : 10159.396us 00:10:19.844 98.00000% : 11317.462us 00:10:19.844 99.00000% : 14002.069us 00:10:19.844 99.50000% : 33268.074us 00:10:19.844 99.90000% : 39584.797us 00:10:19.844 99.99000% : 40005.912us 00:10:19.844 99.99900% : 40005.912us 00:10:19.844 99.99990% : 40005.912us 00:10:19.844 99.99999% : 40005.912us 00:10:19.844 00:10:19.844 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:19.844 ================================================================================= 00:10:19.844 1.00000% : 8317.018us 00:10:19.844 10.00000% : 8580.215us 00:10:19.844 25.00000% : 8738.133us 00:10:19.844 50.00000% : 9001.330us 00:10:19.844 75.00000% : 9317.166us 00:10:19.844 90.00000% : 9685.642us 00:10:19.844 95.00000% : 10212.035us 00:10:19.844 98.00000% : 11475.380us 00:10:19.844 99.00000% : 14212.627us 00:10:19.844 99.50000% : 25266.892us 00:10:19.844 99.90000% : 32425.844us 00:10:19.844 99.99000% : 32846.959us 00:10:19.844 99.99900% : 32846.959us 00:10:19.844 99.99990% : 32846.959us 00:10:19.844 99.99999% : 32846.959us 00:10:19.844 00:10:19.844 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:19.844 ============================================================================== 00:10:19.844 Range in us Cumulative IO count 00:10:19.844 7948.543 - 8001.182: 0.0365% ( 5) 00:10:19.844 8001.182 - 8053.822: 0.0657% ( 4) 00:10:19.844 8053.822 - 8106.461: 0.1825% ( 16) 00:10:19.844 8106.461 - 8159.100: 0.5695% ( 53) 00:10:19.844 8159.100 - 8211.740: 1.2923% ( 99) 00:10:19.844 8211.740 - 8264.379: 2.3657% ( 147) 00:10:19.844 8264.379 - 8317.018: 3.8113% ( 198) 00:10:19.844 8317.018 - 8369.658: 5.5345% ( 236) 00:10:19.844 8369.658 - 8422.297: 7.4839% ( 267) 00:10:19.844 8422.297 - 8474.937: 9.8496% ( 324) 00:10:19.844 8474.937 - 8527.576: 12.8359% ( 409) 00:10:19.844 8527.576 - 8580.215: 16.3040% ( 475) 00:10:19.844 8580.215 - 8632.855: 19.9693% ( 502) 00:10:19.844 8632.855 - 8685.494: 23.9413% ( 544) 00:10:19.844 8685.494 - 8738.133: 28.2491% ( 590) 00:10:19.844 8738.133 - 8790.773: 32.4693% ( 578) 00:10:19.844 8790.773 - 8843.412: 36.7480% ( 586) 00:10:19.844 8843.412 - 8896.051: 41.3040% ( 624) 00:10:19.844 8896.051 - 8948.691: 45.6411% ( 594) 00:10:19.844 8948.691 - 9001.330: 50.1825% ( 622) 00:10:19.844 9001.330 - 9053.969: 54.7167% ( 621) 00:10:19.844 9053.969 - 9106.609: 59.1414% ( 606) 00:10:19.844 9106.609 - 9159.248: 63.4419% ( 589) 00:10:19.844 9159.248 - 9211.888: 67.4796% ( 553) 00:10:19.844 9211.888 - 9264.527: 71.2836% ( 521) 00:10:19.844 9264.527 - 9317.166: 74.6860% ( 466) 00:10:19.844 9317.166 - 9369.806: 78.0374% ( 459) 00:10:19.844 9369.806 - 9422.445: 80.9579% ( 400) 00:10:19.844 9422.445 - 9475.084: 83.7325% ( 380) 00:10:19.844 9475.084 - 9527.724: 85.8207% ( 286) 00:10:19.844 9527.724 - 9580.363: 87.7044% ( 258) 00:10:19.844 9580.363 - 9633.002: 89.0333% ( 182) 00:10:19.844 9633.002 - 9685.642: 90.1212% ( 
149) 00:10:19.844 9685.642 - 9738.281: 90.9828% ( 118) 00:10:19.844 9738.281 - 9790.920: 91.6983% ( 98) 00:10:19.844 9790.920 - 9843.560: 92.3043% ( 83) 00:10:19.844 9843.560 - 9896.199: 92.9468% ( 88) 00:10:19.844 9896.199 - 9948.839: 93.4141% ( 64) 00:10:19.844 9948.839 - 10001.478: 93.8595% ( 61) 00:10:19.844 10001.478 - 10054.117: 94.2611% ( 55) 00:10:19.844 10054.117 - 10106.757: 94.6773% ( 57) 00:10:19.844 10106.757 - 10159.396: 94.9839% ( 42) 00:10:19.844 10159.396 - 10212.035: 95.3125% ( 45) 00:10:19.844 10212.035 - 10264.675: 95.5900% ( 38) 00:10:19.844 10264.675 - 10317.314: 95.7871% ( 27) 00:10:19.844 10317.314 - 10369.953: 96.0353% ( 34) 00:10:19.844 10369.953 - 10422.593: 96.2252% ( 26) 00:10:19.844 10422.593 - 10475.232: 96.3931% ( 23) 00:10:19.844 10475.232 - 10527.871: 96.5829% ( 26) 00:10:19.844 10527.871 - 10580.511: 96.7144% ( 18) 00:10:19.844 10580.511 - 10633.150: 96.8239% ( 15) 00:10:19.844 10633.150 - 10685.790: 96.9553% ( 18) 00:10:19.844 10685.790 - 10738.429: 97.0794% ( 17) 00:10:19.844 10738.429 - 10791.068: 97.1817% ( 14) 00:10:19.844 10791.068 - 10843.708: 97.2547% ( 10) 00:10:19.844 10843.708 - 10896.347: 97.3277% ( 10) 00:10:19.844 10896.347 - 10948.986: 97.3934% ( 9) 00:10:19.844 10948.986 - 11001.626: 97.5102% ( 16) 00:10:19.844 11001.626 - 11054.265: 97.5832% ( 10) 00:10:19.844 11054.265 - 11106.904: 97.6782% ( 13) 00:10:19.844 11106.904 - 11159.544: 97.7658% ( 12) 00:10:19.844 11159.544 - 11212.183: 97.8388% ( 10) 00:10:19.844 11212.183 - 11264.822: 97.9337% ( 13) 00:10:19.844 11264.822 - 11317.462: 98.0213% ( 12) 00:10:19.844 11317.462 - 11370.101: 98.1089% ( 12) 00:10:19.844 11370.101 - 11422.741: 98.1820% ( 10) 00:10:19.844 11422.741 - 11475.380: 98.2842% ( 14) 00:10:19.844 11475.380 - 11528.019: 98.3280% ( 6) 00:10:19.844 11528.019 - 11580.659: 98.3791% ( 7) 00:10:19.844 11580.659 - 11633.298: 98.4229% ( 6) 00:10:19.844 11633.298 - 11685.937: 98.4740% ( 7) 00:10:19.844 11685.937 - 11738.577: 98.5032% ( 4) 00:10:19.844 11738.577 - 11791.216: 98.5397% ( 5) 00:10:19.844 11791.216 - 11843.855: 98.5835% ( 6) 00:10:19.844 11843.855 - 11896.495: 98.6273% ( 6) 00:10:19.844 11896.495 - 11949.134: 98.6565% ( 4) 00:10:19.844 11949.134 - 12001.773: 98.6638% ( 1) 00:10:19.844 12001.773 - 12054.413: 98.6857% ( 3) 00:10:19.844 12054.413 - 12107.052: 98.6930% ( 1) 00:10:19.844 12107.052 - 12159.692: 98.7150% ( 3) 00:10:19.844 12159.692 - 12212.331: 98.7223% ( 1) 00:10:19.844 12212.331 - 12264.970: 98.7296% ( 1) 00:10:19.844 12264.970 - 12317.610: 98.7442% ( 2) 00:10:19.844 12317.610 - 12370.249: 98.7588% ( 2) 00:10:19.844 12370.249 - 12422.888: 98.7734% ( 2) 00:10:19.844 12422.888 - 12475.528: 98.7880% ( 2) 00:10:19.844 12475.528 - 12528.167: 98.7953% ( 1) 00:10:19.844 12528.167 - 12580.806: 98.8099% ( 2) 00:10:19.844 12580.806 - 12633.446: 98.8245% ( 2) 00:10:19.844 12633.446 - 12686.085: 98.8391% ( 2) 00:10:19.844 12686.085 - 12738.724: 98.8464% ( 1) 00:10:19.844 12738.724 - 12791.364: 98.8610% ( 2) 00:10:19.844 12791.364 - 12844.003: 98.8756% ( 2) 00:10:19.844 12844.003 - 12896.643: 98.8902% ( 2) 00:10:19.844 12896.643 - 12949.282: 98.9048% ( 2) 00:10:19.844 12949.282 - 13001.921: 98.9267% ( 3) 00:10:19.844 13001.921 - 13054.561: 98.9340% ( 1) 00:10:19.844 13054.561 - 13107.200: 98.9559% ( 3) 00:10:19.844 13107.200 - 13159.839: 98.9632% ( 1) 00:10:19.844 13159.839 - 13212.479: 98.9851% ( 3) 00:10:19.844 13212.479 - 13265.118: 98.9924% ( 1) 00:10:19.845 13265.118 - 13317.757: 99.0216% ( 4) 00:10:19.845 13317.757 - 13370.397: 99.0289% ( 1) 00:10:19.845 13370.397 - 
13423.036: 99.0435% ( 2) 00:10:19.845 13423.036 - 13475.676: 99.0654% ( 3) 00:10:19.845 37479.222 - 37689.780: 99.0800% ( 2) 00:10:19.845 37689.780 - 37900.337: 99.1311% ( 7) 00:10:19.845 37900.337 - 38110.895: 99.1895% ( 8) 00:10:19.845 38110.895 - 38321.452: 99.2480% ( 8) 00:10:19.845 38321.452 - 38532.010: 99.2991% ( 7) 00:10:19.845 38532.010 - 38742.567: 99.3575% ( 8) 00:10:19.845 38742.567 - 38953.124: 99.4086% ( 7) 00:10:19.845 38953.124 - 39163.682: 99.4670% ( 8) 00:10:19.845 39163.682 - 39374.239: 99.5108% ( 6) 00:10:19.845 39374.239 - 39584.797: 99.5327% ( 3) 00:10:19.845 45059.290 - 45269.847: 99.5473% ( 2) 00:10:19.845 45269.847 - 45480.405: 99.6057% ( 8) 00:10:19.845 45480.405 - 45690.962: 99.6568% ( 7) 00:10:19.845 45690.962 - 45901.520: 99.7152% ( 8) 00:10:19.845 45901.520 - 46112.077: 99.7664% ( 7) 00:10:19.845 46112.077 - 46322.635: 99.8175% ( 7) 00:10:19.845 46322.635 - 46533.192: 99.8686% ( 7) 00:10:19.845 46533.192 - 46743.749: 99.9197% ( 7) 00:10:19.845 46743.749 - 46954.307: 99.9781% ( 8) 00:10:19.845 46954.307 - 47164.864: 100.0000% ( 3) 00:10:19.845 00:10:19.845 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:19.845 ============================================================================== 00:10:19.845 Range in us Cumulative IO count 00:10:19.845 8053.822 - 8106.461: 0.0730% ( 10) 00:10:19.845 8106.461 - 8159.100: 0.1533% ( 11) 00:10:19.845 8159.100 - 8211.740: 0.3797% ( 31) 00:10:19.845 8211.740 - 8264.379: 0.8616% ( 66) 00:10:19.845 8264.379 - 8317.018: 2.0006% ( 156) 00:10:19.845 8317.018 - 8369.658: 3.2491% ( 171) 00:10:19.845 8369.658 - 8422.297: 5.0745% ( 250) 00:10:19.845 8422.297 - 8474.937: 7.1262% ( 281) 00:10:19.845 8474.937 - 8527.576: 9.7474% ( 359) 00:10:19.845 8527.576 - 8580.215: 13.0257% ( 449) 00:10:19.845 8580.215 - 8632.855: 16.8443% ( 523) 00:10:19.845 8632.855 - 8685.494: 21.1157% ( 585) 00:10:19.845 8685.494 - 8738.133: 25.8032% ( 642) 00:10:19.845 8738.133 - 8790.773: 30.6659% ( 666) 00:10:19.845 8790.773 - 8843.412: 35.8864% ( 715) 00:10:19.845 8843.412 - 8896.051: 41.0704% ( 710) 00:10:19.845 8896.051 - 8948.691: 46.3712% ( 726) 00:10:19.845 8948.691 - 9001.330: 51.4968% ( 702) 00:10:19.845 9001.330 - 9053.969: 56.5567% ( 693) 00:10:19.845 9053.969 - 9106.609: 61.4048% ( 664) 00:10:19.845 9106.609 - 9159.248: 65.9317% ( 620) 00:10:19.845 9159.248 - 9211.888: 70.1227% ( 574) 00:10:19.845 9211.888 - 9264.527: 73.9194% ( 520) 00:10:19.845 9264.527 - 9317.166: 77.5701% ( 500) 00:10:19.845 9317.166 - 9369.806: 80.8119% ( 444) 00:10:19.845 9369.806 - 9422.445: 83.5061% ( 369) 00:10:19.845 9422.445 - 9475.084: 85.6893% ( 299) 00:10:19.845 9475.084 - 9527.724: 87.4635% ( 243) 00:10:19.845 9527.724 - 9580.363: 88.7850% ( 181) 00:10:19.845 9580.363 - 9633.002: 89.8730% ( 149) 00:10:19.845 9633.002 - 9685.642: 90.6396% ( 105) 00:10:19.845 9685.642 - 9738.281: 91.2602% ( 85) 00:10:19.845 9738.281 - 9790.920: 91.8589% ( 82) 00:10:19.845 9790.920 - 9843.560: 92.3773% ( 71) 00:10:19.845 9843.560 - 9896.199: 92.8884% ( 70) 00:10:19.845 9896.199 - 9948.839: 93.3557% ( 64) 00:10:19.845 9948.839 - 10001.478: 93.7719% ( 57) 00:10:19.845 10001.478 - 10054.117: 94.1224% ( 48) 00:10:19.845 10054.117 - 10106.757: 94.4582% ( 46) 00:10:19.845 10106.757 - 10159.396: 94.8087% ( 48) 00:10:19.845 10159.396 - 10212.035: 95.0935% ( 39) 00:10:19.845 10212.035 - 10264.675: 95.3417% ( 34) 00:10:19.845 10264.675 - 10317.314: 95.6119% ( 37) 00:10:19.845 10317.314 - 10369.953: 95.8163% ( 28) 00:10:19.845 10369.953 - 10422.593: 96.0134% ( 27) 00:10:19.845 
10422.593 - 10475.232: 96.1522% ( 19) 00:10:19.845 10475.232 - 10527.871: 96.2544% ( 14) 00:10:19.845 10527.871 - 10580.511: 96.4004% ( 20) 00:10:19.845 10580.511 - 10633.150: 96.5537% ( 21) 00:10:19.845 10633.150 - 10685.790: 96.6998% ( 20) 00:10:19.845 10685.790 - 10738.429: 96.8385% ( 19) 00:10:19.845 10738.429 - 10791.068: 96.9991% ( 22) 00:10:19.845 10791.068 - 10843.708: 97.1232% ( 17) 00:10:19.845 10843.708 - 10896.347: 97.2766% ( 21) 00:10:19.845 10896.347 - 10948.986: 97.4080% ( 18) 00:10:19.845 10948.986 - 11001.626: 97.5321% ( 17) 00:10:19.845 11001.626 - 11054.265: 97.6416% ( 15) 00:10:19.845 11054.265 - 11106.904: 97.7585% ( 16) 00:10:19.845 11106.904 - 11159.544: 97.8753% ( 16) 00:10:19.845 11159.544 - 11212.183: 97.9994% ( 17) 00:10:19.845 11212.183 - 11264.822: 98.1089% ( 15) 00:10:19.845 11264.822 - 11317.462: 98.2185% ( 15) 00:10:19.845 11317.462 - 11370.101: 98.3207% ( 14) 00:10:19.845 11370.101 - 11422.741: 98.3937% ( 10) 00:10:19.845 11422.741 - 11475.380: 98.4594% ( 9) 00:10:19.845 11475.380 - 11528.019: 98.4886% ( 4) 00:10:19.845 11528.019 - 11580.659: 98.5105% ( 3) 00:10:19.845 11580.659 - 11633.298: 98.5397% ( 4) 00:10:19.845 11633.298 - 11685.937: 98.5689% ( 4) 00:10:19.845 11685.937 - 11738.577: 98.5908% ( 3) 00:10:19.845 11738.577 - 11791.216: 98.5981% ( 1) 00:10:19.845 12159.692 - 12212.331: 98.6054% ( 1) 00:10:19.845 12212.331 - 12264.970: 98.6200% ( 2) 00:10:19.845 12264.970 - 12317.610: 98.6419% ( 3) 00:10:19.845 12317.610 - 12370.249: 98.6784% ( 5) 00:10:19.845 12370.249 - 12422.888: 98.7004% ( 3) 00:10:19.845 12422.888 - 12475.528: 98.7077% ( 1) 00:10:19.845 12475.528 - 12528.167: 98.7150% ( 1) 00:10:19.845 12528.167 - 12580.806: 98.7296% ( 2) 00:10:19.845 12580.806 - 12633.446: 98.7442% ( 2) 00:10:19.845 12633.446 - 12686.085: 98.7588% ( 2) 00:10:19.845 12686.085 - 12738.724: 98.7880% ( 4) 00:10:19.845 12738.724 - 12791.364: 98.7953% ( 1) 00:10:19.845 12791.364 - 12844.003: 98.8099% ( 2) 00:10:19.845 12844.003 - 12896.643: 98.8245% ( 2) 00:10:19.845 12896.643 - 12949.282: 98.8318% ( 1) 00:10:19.845 12949.282 - 13001.921: 98.8537% ( 3) 00:10:19.845 13001.921 - 13054.561: 98.8683% ( 2) 00:10:19.845 13054.561 - 13107.200: 98.8829% ( 2) 00:10:19.845 13107.200 - 13159.839: 98.8975% ( 2) 00:10:19.845 13159.839 - 13212.479: 98.9194% ( 3) 00:10:19.845 13212.479 - 13265.118: 98.9340% ( 2) 00:10:19.845 13265.118 - 13317.757: 98.9486% ( 2) 00:10:19.845 13317.757 - 13370.397: 98.9632% ( 2) 00:10:19.845 13370.397 - 13423.036: 98.9778% ( 2) 00:10:19.845 13423.036 - 13475.676: 98.9997% ( 3) 00:10:19.845 13475.676 - 13580.954: 99.0289% ( 4) 00:10:19.845 13580.954 - 13686.233: 99.0654% ( 5) 00:10:19.845 36005.320 - 36215.878: 99.1092% ( 6) 00:10:19.845 36215.878 - 36426.435: 99.1603% ( 7) 00:10:19.845 36426.435 - 36636.993: 99.2188% ( 8) 00:10:19.845 36636.993 - 36847.550: 99.2772% ( 8) 00:10:19.845 36847.550 - 37058.108: 99.3356% ( 8) 00:10:19.845 37058.108 - 37268.665: 99.3867% ( 7) 00:10:19.845 37268.665 - 37479.222: 99.4451% ( 8) 00:10:19.845 37479.222 - 37689.780: 99.5108% ( 9) 00:10:19.845 37689.780 - 37900.337: 99.5327% ( 3) 00:10:19.845 43164.273 - 43374.831: 99.5619% ( 4) 00:10:19.845 43374.831 - 43585.388: 99.6130% ( 7) 00:10:19.845 43585.388 - 43795.945: 99.6641% ( 7) 00:10:19.845 43795.945 - 44006.503: 99.7225% ( 8) 00:10:19.845 44006.503 - 44217.060: 99.7810% ( 8) 00:10:19.845 44217.060 - 44427.618: 99.8321% ( 7) 00:10:19.845 44427.618 - 44638.175: 99.8832% ( 7) 00:10:19.845 44638.175 - 44848.733: 99.9343% ( 7) 00:10:19.845 44848.733 - 45059.290: 99.9927% ( 
8) 00:10:19.845 45059.290 - 45269.847: 100.0000% ( 1) 00:10:19.845 00:10:19.845 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:19.845 ============================================================================== 00:10:19.845 Range in us Cumulative IO count 00:10:19.845 8053.822 - 8106.461: 0.1314% ( 18) 00:10:19.845 8106.461 - 8159.100: 0.2117% ( 11) 00:10:19.845 8159.100 - 8211.740: 0.4673% ( 35) 00:10:19.845 8211.740 - 8264.379: 0.8908% ( 58) 00:10:19.845 8264.379 - 8317.018: 2.0225% ( 155) 00:10:19.845 8317.018 - 8369.658: 3.5266% ( 206) 00:10:19.845 8369.658 - 8422.297: 5.3008% ( 243) 00:10:19.845 8422.297 - 8474.937: 7.5862% ( 313) 00:10:19.845 8474.937 - 8527.576: 10.2512% ( 365) 00:10:19.845 8527.576 - 8580.215: 13.4565% ( 439) 00:10:19.845 8580.215 - 8632.855: 17.2532% ( 520) 00:10:19.845 8632.855 - 8685.494: 21.3055% ( 555) 00:10:19.845 8685.494 - 8738.133: 25.8105% ( 617) 00:10:19.845 8738.133 - 8790.773: 30.7462% ( 676) 00:10:19.845 8790.773 - 8843.412: 35.8061% ( 693) 00:10:19.845 8843.412 - 8896.051: 40.9025% ( 698) 00:10:19.845 8896.051 - 8948.691: 46.0718% ( 708) 00:10:19.845 8948.691 - 9001.330: 51.4311% ( 734) 00:10:19.845 9001.330 - 9053.969: 56.6151% ( 710) 00:10:19.845 9053.969 - 9106.609: 61.5581% ( 677) 00:10:19.845 9106.609 - 9159.248: 66.0558% ( 616) 00:10:19.845 9159.248 - 9211.888: 70.1738% ( 564) 00:10:19.845 9211.888 - 9264.527: 74.0946% ( 537) 00:10:19.845 9264.527 - 9317.166: 77.7307% ( 498) 00:10:19.845 9317.166 - 9369.806: 80.8557% ( 428) 00:10:19.845 9369.806 - 9422.445: 83.4477% ( 355) 00:10:19.845 9422.445 - 9475.084: 85.6600% ( 303) 00:10:19.845 9475.084 - 9527.724: 87.4197% ( 241) 00:10:19.845 9527.724 - 9580.363: 88.8362% ( 194) 00:10:19.845 9580.363 - 9633.002: 89.9314% ( 150) 00:10:19.845 9633.002 - 9685.642: 90.7637% ( 114) 00:10:19.845 9685.642 - 9738.281: 91.3843% ( 85) 00:10:19.845 9738.281 - 9790.920: 91.9466% ( 77) 00:10:19.845 9790.920 - 9843.560: 92.4504% ( 69) 00:10:19.845 9843.560 - 9896.199: 92.8811% ( 59) 00:10:19.845 9896.199 - 9948.839: 93.2389% ( 49) 00:10:19.845 9948.839 - 10001.478: 93.6040% ( 50) 00:10:19.845 10001.478 - 10054.117: 93.9325% ( 45) 00:10:19.845 10054.117 - 10106.757: 94.2392% ( 42) 00:10:19.845 10106.757 - 10159.396: 94.6189% ( 52) 00:10:19.845 10159.396 - 10212.035: 94.9328% ( 43) 00:10:19.846 10212.035 - 10264.675: 95.2322% ( 41) 00:10:19.846 10264.675 - 10317.314: 95.5242% ( 40) 00:10:19.846 10317.314 - 10369.953: 95.7871% ( 36) 00:10:19.846 10369.953 - 10422.593: 96.0280% ( 33) 00:10:19.846 10422.593 - 10475.232: 96.2398% ( 29) 00:10:19.846 10475.232 - 10527.871: 96.4223% ( 25) 00:10:19.846 10527.871 - 10580.511: 96.5902% ( 23) 00:10:19.846 10580.511 - 10633.150: 96.7801% ( 26) 00:10:19.846 10633.150 - 10685.790: 96.9480% ( 23) 00:10:19.846 10685.790 - 10738.429: 97.1305% ( 25) 00:10:19.846 10738.429 - 10791.068: 97.2985% ( 23) 00:10:19.846 10791.068 - 10843.708: 97.4518% ( 21) 00:10:19.846 10843.708 - 10896.347: 97.6051% ( 21) 00:10:19.846 10896.347 - 10948.986: 97.7512% ( 20) 00:10:19.846 10948.986 - 11001.626: 97.8680% ( 16) 00:10:19.846 11001.626 - 11054.265: 97.9483% ( 11) 00:10:19.846 11054.265 - 11106.904: 98.0505% ( 14) 00:10:19.846 11106.904 - 11159.544: 98.1235% ( 10) 00:10:19.846 11159.544 - 11212.183: 98.1820% ( 8) 00:10:19.846 11212.183 - 11264.822: 98.2185% ( 5) 00:10:19.846 11264.822 - 11317.462: 98.2623% ( 6) 00:10:19.846 11317.462 - 11370.101: 98.2988% ( 5) 00:10:19.846 11370.101 - 11422.741: 98.3280% ( 4) 00:10:19.846 11422.741 - 11475.380: 98.3572% ( 4) 00:10:19.846 
11475.380 - 11528.019: 98.3791% ( 3) 00:10:19.846 11528.019 - 11580.659: 98.4083% ( 4) 00:10:19.846 11580.659 - 11633.298: 98.4302% ( 3) 00:10:19.846 11633.298 - 11685.937: 98.4594% ( 4) 00:10:19.846 11685.937 - 11738.577: 98.4886% ( 4) 00:10:19.846 11738.577 - 11791.216: 98.5105% ( 3) 00:10:19.846 11791.216 - 11843.855: 98.5397% ( 4) 00:10:19.846 11843.855 - 11896.495: 98.5689% ( 4) 00:10:19.846 11896.495 - 11949.134: 98.5981% ( 4) 00:10:19.846 12107.052 - 12159.692: 98.6200% ( 3) 00:10:19.846 12159.692 - 12212.331: 98.6273% ( 1) 00:10:19.846 12212.331 - 12264.970: 98.6565% ( 4) 00:10:19.846 12264.970 - 12317.610: 98.6930% ( 5) 00:10:19.846 12317.610 - 12370.249: 98.7077% ( 2) 00:10:19.846 12422.888 - 12475.528: 98.7223% ( 2) 00:10:19.846 12475.528 - 12528.167: 98.7369% ( 2) 00:10:19.846 12528.167 - 12580.806: 98.7588% ( 3) 00:10:19.846 12580.806 - 12633.446: 98.7734% ( 2) 00:10:19.846 12633.446 - 12686.085: 98.7953% ( 3) 00:10:19.846 12686.085 - 12738.724: 98.8026% ( 1) 00:10:19.846 12738.724 - 12791.364: 98.8172% ( 2) 00:10:19.846 12791.364 - 12844.003: 98.8318% ( 2) 00:10:19.846 12844.003 - 12896.643: 98.8464% ( 2) 00:10:19.846 12896.643 - 12949.282: 98.8610% ( 2) 00:10:19.846 12949.282 - 13001.921: 98.8756% ( 2) 00:10:19.846 13001.921 - 13054.561: 98.8975% ( 3) 00:10:19.846 13054.561 - 13107.200: 98.9121% ( 2) 00:10:19.846 13107.200 - 13159.839: 98.9340% ( 3) 00:10:19.846 13159.839 - 13212.479: 98.9486% ( 2) 00:10:19.846 13212.479 - 13265.118: 98.9632% ( 2) 00:10:19.846 13265.118 - 13317.757: 98.9778% ( 2) 00:10:19.846 13317.757 - 13370.397: 98.9997% ( 3) 00:10:19.846 13370.397 - 13423.036: 99.0143% ( 2) 00:10:19.846 13423.036 - 13475.676: 99.0289% ( 2) 00:10:19.846 13475.676 - 13580.954: 99.0654% ( 5) 00:10:19.846 34952.533 - 35163.091: 99.0946% ( 4) 00:10:19.846 35163.091 - 35373.648: 99.1457% ( 7) 00:10:19.846 35373.648 - 35584.206: 99.1968% ( 7) 00:10:19.846 35584.206 - 35794.763: 99.2553% ( 8) 00:10:19.846 35794.763 - 36005.320: 99.3137% ( 8) 00:10:19.846 36005.320 - 36215.878: 99.3648% ( 7) 00:10:19.846 36215.878 - 36426.435: 99.4232% ( 8) 00:10:19.846 36426.435 - 36636.993: 99.4816% ( 8) 00:10:19.846 36636.993 - 36847.550: 99.5327% ( 7) 00:10:19.846 41690.371 - 41900.929: 99.5546% ( 3) 00:10:19.846 41900.929 - 42111.486: 99.6057% ( 7) 00:10:19.846 42111.486 - 42322.043: 99.6568% ( 7) 00:10:19.846 42322.043 - 42532.601: 99.7152% ( 8) 00:10:19.846 42532.601 - 42743.158: 99.7664% ( 7) 00:10:19.846 42743.158 - 42953.716: 99.8175% ( 7) 00:10:19.846 42953.716 - 43164.273: 99.8759% ( 8) 00:10:19.846 43164.273 - 43374.831: 99.9343% ( 8) 00:10:19.846 43374.831 - 43585.388: 99.9854% ( 7) 00:10:19.846 43585.388 - 43795.945: 100.0000% ( 2) 00:10:19.846 00:10:19.846 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:19.846 ============================================================================== 00:10:19.846 Range in us Cumulative IO count 00:10:19.846 8053.822 - 8106.461: 0.0365% ( 5) 00:10:19.846 8106.461 - 8159.100: 0.1460% ( 15) 00:10:19.846 8159.100 - 8211.740: 0.3213% ( 24) 00:10:19.846 8211.740 - 8264.379: 0.8178% ( 68) 00:10:19.846 8264.379 - 8317.018: 1.7450% ( 127) 00:10:19.846 8317.018 - 8369.658: 3.3221% ( 216) 00:10:19.846 8369.658 - 8422.297: 5.1840% ( 255) 00:10:19.846 8422.297 - 8474.937: 7.3087% ( 291) 00:10:19.846 8474.937 - 8527.576: 10.0029% ( 369) 00:10:19.846 8527.576 - 8580.215: 13.2447% ( 444) 00:10:19.846 8580.215 - 8632.855: 17.0999% ( 528) 00:10:19.846 8632.855 - 8685.494: 21.1960% ( 561) 00:10:19.846 8685.494 - 8738.133: 25.7886% ( 629) 
00:10:19.846 8738.133 - 8790.773: 30.6367% ( 664) 00:10:19.846 8790.773 - 8843.412: 35.5943% ( 679) 00:10:19.846 8843.412 - 8896.051: 40.6761% ( 696) 00:10:19.846 8896.051 - 8948.691: 45.9404% ( 721) 00:10:19.846 8948.691 - 9001.330: 51.2631% ( 729) 00:10:19.846 9001.330 - 9053.969: 56.5129% ( 719) 00:10:19.846 9053.969 - 9106.609: 61.5873% ( 695) 00:10:19.846 9106.609 - 9159.248: 66.2018% ( 632) 00:10:19.846 9159.248 - 9211.888: 70.3271% ( 565) 00:10:19.846 9211.888 - 9264.527: 74.2334% ( 535) 00:10:19.846 9264.527 - 9317.166: 77.8914% ( 501) 00:10:19.846 9317.166 - 9369.806: 80.8411% ( 404) 00:10:19.846 9369.806 - 9422.445: 83.3601% ( 345) 00:10:19.846 9422.445 - 9475.084: 85.6162% ( 309) 00:10:19.846 9475.084 - 9527.724: 87.4051% ( 245) 00:10:19.846 9527.724 - 9580.363: 88.7412% ( 183) 00:10:19.846 9580.363 - 9633.002: 89.7780% ( 142) 00:10:19.846 9633.002 - 9685.642: 90.6469% ( 119) 00:10:19.846 9685.642 - 9738.281: 91.3843% ( 101) 00:10:19.846 9738.281 - 9790.920: 92.0561% ( 92) 00:10:19.846 9790.920 - 9843.560: 92.5818% ( 72) 00:10:19.846 9843.560 - 9896.199: 93.0710% ( 67) 00:10:19.846 9896.199 - 9948.839: 93.4287% ( 49) 00:10:19.846 9948.839 - 10001.478: 93.7208% ( 40) 00:10:19.846 10001.478 - 10054.117: 94.0786% ( 49) 00:10:19.846 10054.117 - 10106.757: 94.3998% ( 44) 00:10:19.846 10106.757 - 10159.396: 94.6846% ( 39) 00:10:19.846 10159.396 - 10212.035: 94.9839% ( 41) 00:10:19.846 10212.035 - 10264.675: 95.2906% ( 42) 00:10:19.846 10264.675 - 10317.314: 95.5827% ( 40) 00:10:19.846 10317.314 - 10369.953: 95.7871% ( 28) 00:10:19.846 10369.953 - 10422.593: 96.0207% ( 32) 00:10:19.846 10422.593 - 10475.232: 96.2690% ( 34) 00:10:19.846 10475.232 - 10527.871: 96.4661% ( 27) 00:10:19.846 10527.871 - 10580.511: 96.6341% ( 23) 00:10:19.846 10580.511 - 10633.150: 96.8166% ( 25) 00:10:19.846 10633.150 - 10685.790: 96.9772% ( 22) 00:10:19.846 10685.790 - 10738.429: 97.1598% ( 25) 00:10:19.846 10738.429 - 10791.068: 97.3496% ( 26) 00:10:19.846 10791.068 - 10843.708: 97.5029% ( 21) 00:10:19.846 10843.708 - 10896.347: 97.5686% ( 9) 00:10:19.846 10896.347 - 10948.986: 97.6343% ( 9) 00:10:19.846 10948.986 - 11001.626: 97.7293% ( 13) 00:10:19.846 11001.626 - 11054.265: 97.8242% ( 13) 00:10:19.846 11054.265 - 11106.904: 97.9118% ( 12) 00:10:19.846 11106.904 - 11159.544: 97.9921% ( 11) 00:10:19.846 11159.544 - 11212.183: 98.0651% ( 10) 00:10:19.846 11212.183 - 11264.822: 98.1235% ( 8) 00:10:19.846 11264.822 - 11317.462: 98.1673% ( 6) 00:10:19.846 11317.462 - 11370.101: 98.2039% ( 5) 00:10:19.846 11370.101 - 11422.741: 98.2404% ( 5) 00:10:19.846 11422.741 - 11475.380: 98.2842% ( 6) 00:10:19.846 11475.380 - 11528.019: 98.3280% ( 6) 00:10:19.846 11528.019 - 11580.659: 98.3572% ( 4) 00:10:19.846 11580.659 - 11633.298: 98.4010% ( 6) 00:10:19.846 11633.298 - 11685.937: 98.4448% ( 6) 00:10:19.846 11685.937 - 11738.577: 98.4813% ( 5) 00:10:19.846 11738.577 - 11791.216: 98.5251% ( 6) 00:10:19.846 11791.216 - 11843.855: 98.5543% ( 4) 00:10:19.846 11843.855 - 11896.495: 98.5908% ( 5) 00:10:19.846 11896.495 - 11949.134: 98.5981% ( 1) 00:10:19.846 12422.888 - 12475.528: 98.6273% ( 4) 00:10:19.846 12475.528 - 12528.167: 98.6419% ( 2) 00:10:19.846 12528.167 - 12580.806: 98.6492% ( 1) 00:10:19.846 12580.806 - 12633.446: 98.6638% ( 2) 00:10:19.846 12633.446 - 12686.085: 98.6930% ( 4) 00:10:19.846 12686.085 - 12738.724: 98.7077% ( 2) 00:10:19.846 12738.724 - 12791.364: 98.7223% ( 2) 00:10:19.846 12791.364 - 12844.003: 98.7369% ( 2) 00:10:19.846 12844.003 - 12896.643: 98.7588% ( 3) 00:10:19.846 12896.643 - 12949.282: 
98.7734% ( 2) 00:10:19.846 12949.282 - 13001.921: 98.7953% ( 3) 00:10:19.846 13001.921 - 13054.561: 98.8099% ( 2) 00:10:19.846 13054.561 - 13107.200: 98.8318% ( 3) 00:10:19.846 13107.200 - 13159.839: 98.8537% ( 3) 00:10:19.846 13159.839 - 13212.479: 98.8683% ( 2) 00:10:19.846 13212.479 - 13265.118: 98.8829% ( 2) 00:10:19.846 13265.118 - 13317.757: 98.8975% ( 2) 00:10:19.846 13317.757 - 13370.397: 98.9194% ( 3) 00:10:19.846 13370.397 - 13423.036: 98.9340% ( 2) 00:10:19.846 13423.036 - 13475.676: 98.9486% ( 2) 00:10:19.846 13475.676 - 13580.954: 98.9778% ( 4) 00:10:19.846 13580.954 - 13686.233: 99.0143% ( 5) 00:10:19.846 13686.233 - 13791.512: 99.0508% ( 5) 00:10:19.846 13791.512 - 13896.790: 99.0654% ( 2) 00:10:19.846 33268.074 - 33478.631: 99.1019% ( 5) 00:10:19.846 33478.631 - 33689.189: 99.1530% ( 7) 00:10:19.846 33689.189 - 33899.746: 99.2114% ( 8) 00:10:19.846 33899.746 - 34110.304: 99.2772% ( 9) 00:10:19.846 34110.304 - 34320.861: 99.3356% ( 8) 00:10:19.846 34320.861 - 34531.418: 99.3940% ( 8) 00:10:19.846 34531.418 - 34741.976: 99.4451% ( 7) 00:10:19.846 34741.976 - 34952.533: 99.5035% ( 8) 00:10:19.846 34952.533 - 35163.091: 99.5327% ( 4) 00:10:19.846 39795.354 - 40005.912: 99.5546% ( 3) 00:10:19.846 40005.912 - 40216.469: 99.6130% ( 8) 00:10:19.847 40216.469 - 40427.027: 99.6714% ( 8) 00:10:19.847 40427.027 - 40637.584: 99.7298% ( 8) 00:10:19.847 40637.584 - 40848.141: 99.7883% ( 8) 00:10:19.847 40848.141 - 41058.699: 99.8394% ( 7) 00:10:19.847 41058.699 - 41269.256: 99.8978% ( 8) 00:10:19.847 41269.256 - 41479.814: 99.9562% ( 8) 00:10:19.847 41479.814 - 41690.371: 100.0000% ( 6) 00:10:19.847 00:10:19.847 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:19.847 ============================================================================== 00:10:19.847 Range in us Cumulative IO count 00:10:19.847 8001.182 - 8053.822: 0.0219% ( 3) 00:10:19.847 8053.822 - 8106.461: 0.1095% ( 12) 00:10:19.847 8106.461 - 8159.100: 0.1971% ( 12) 00:10:19.847 8159.100 - 8211.740: 0.4381% ( 33) 00:10:19.847 8211.740 - 8264.379: 0.8908% ( 62) 00:10:19.847 8264.379 - 8317.018: 1.9057% ( 139) 00:10:19.847 8317.018 - 8369.658: 3.3586% ( 199) 00:10:19.847 8369.658 - 8422.297: 5.2351% ( 257) 00:10:19.847 8422.297 - 8474.937: 7.2941% ( 282) 00:10:19.847 8474.937 - 8527.576: 10.1343% ( 389) 00:10:19.847 8527.576 - 8580.215: 13.3397% ( 439) 00:10:19.847 8580.215 - 8632.855: 17.1948% ( 528) 00:10:19.847 8632.855 - 8685.494: 21.3055% ( 563) 00:10:19.847 8685.494 - 8738.133: 25.9419% ( 635) 00:10:19.847 8738.133 - 8790.773: 30.7170% ( 654) 00:10:19.847 8790.773 - 8843.412: 35.6966% ( 682) 00:10:19.847 8843.412 - 8896.051: 40.7491% ( 692) 00:10:19.847 8896.051 - 8948.691: 45.8090% ( 693) 00:10:19.847 8948.691 - 9001.330: 51.1098% ( 726) 00:10:19.847 9001.330 - 9053.969: 56.3814% ( 722) 00:10:19.847 9053.969 - 9106.609: 61.3245% ( 677) 00:10:19.847 9106.609 - 9159.248: 65.9098% ( 628) 00:10:19.847 9159.248 - 9211.888: 69.9985% ( 560) 00:10:19.847 9211.888 - 9264.527: 73.8829% ( 532) 00:10:19.847 9264.527 - 9317.166: 77.3730% ( 478) 00:10:19.847 9317.166 - 9369.806: 80.4761% ( 425) 00:10:19.847 9369.806 - 9422.445: 83.1849% ( 371) 00:10:19.847 9422.445 - 9475.084: 85.4410% ( 309) 00:10:19.847 9475.084 - 9527.724: 87.1641% ( 236) 00:10:19.847 9527.724 - 9580.363: 88.5806% ( 194) 00:10:19.847 9580.363 - 9633.002: 89.7342% ( 158) 00:10:19.847 9633.002 - 9685.642: 90.5885% ( 117) 00:10:19.847 9685.642 - 9738.281: 91.3697% ( 107) 00:10:19.847 9738.281 - 9790.920: 92.1218% ( 103) 00:10:19.847 9790.920 - 
9843.560: 92.7643% ( 88) 00:10:19.847 9843.560 - 9896.199: 93.3484% ( 80) 00:10:19.847 9896.199 - 9948.839: 93.7865% ( 60) 00:10:19.847 9948.839 - 10001.478: 94.1370% ( 48) 00:10:19.847 10001.478 - 10054.117: 94.4874% ( 48) 00:10:19.847 10054.117 - 10106.757: 94.7868% ( 41) 00:10:19.847 10106.757 - 10159.396: 95.1008% ( 43) 00:10:19.847 10159.396 - 10212.035: 95.3490% ( 34) 00:10:19.847 10212.035 - 10264.675: 95.5900% ( 33) 00:10:19.847 10264.675 - 10317.314: 95.7798% ( 26) 00:10:19.847 10317.314 - 10369.953: 95.9696% ( 26) 00:10:19.847 10369.953 - 10422.593: 96.1230% ( 21) 00:10:19.847 10422.593 - 10475.232: 96.3639% ( 33) 00:10:19.847 10475.232 - 10527.871: 96.5537% ( 26) 00:10:19.847 10527.871 - 10580.511: 96.7290% ( 24) 00:10:19.847 10580.511 - 10633.150: 96.8458% ( 16) 00:10:19.847 10633.150 - 10685.790: 96.9480% ( 14) 00:10:19.847 10685.790 - 10738.429: 97.0356% ( 12) 00:10:19.847 10738.429 - 10791.068: 97.1232% ( 12) 00:10:19.847 10791.068 - 10843.708: 97.2255% ( 14) 00:10:19.847 10843.708 - 10896.347: 97.3204% ( 13) 00:10:19.847 10896.347 - 10948.986: 97.4372% ( 16) 00:10:19.847 10948.986 - 11001.626: 97.5248% ( 12) 00:10:19.847 11001.626 - 11054.265: 97.6197% ( 13) 00:10:19.847 11054.265 - 11106.904: 97.7074% ( 12) 00:10:19.847 11106.904 - 11159.544: 97.8169% ( 15) 00:10:19.847 11159.544 - 11212.183: 97.9191% ( 14) 00:10:19.847 11212.183 - 11264.822: 97.9921% ( 10) 00:10:19.847 11264.822 - 11317.462: 98.0651% ( 10) 00:10:19.847 11317.462 - 11370.101: 98.1016% ( 5) 00:10:19.847 11370.101 - 11422.741: 98.1527% ( 7) 00:10:19.847 11422.741 - 11475.380: 98.1820% ( 4) 00:10:19.847 11475.380 - 11528.019: 98.2258% ( 6) 00:10:19.847 11528.019 - 11580.659: 98.2696% ( 6) 00:10:19.847 11580.659 - 11633.298: 98.3061% ( 5) 00:10:19.847 11633.298 - 11685.937: 98.3499% ( 6) 00:10:19.847 11685.937 - 11738.577: 98.3937% ( 6) 00:10:19.847 11738.577 - 11791.216: 98.4229% ( 4) 00:10:19.847 11791.216 - 11843.855: 98.4375% ( 2) 00:10:19.847 11843.855 - 11896.495: 98.4521% ( 2) 00:10:19.847 11896.495 - 11949.134: 98.4667% ( 2) 00:10:19.847 11949.134 - 12001.773: 98.4740% ( 1) 00:10:19.847 12001.773 - 12054.413: 98.4886% ( 2) 00:10:19.847 12054.413 - 12107.052: 98.5032% ( 2) 00:10:19.847 12107.052 - 12159.692: 98.5178% ( 2) 00:10:19.847 12159.692 - 12212.331: 98.5251% ( 1) 00:10:19.847 12212.331 - 12264.970: 98.5397% ( 2) 00:10:19.847 12264.970 - 12317.610: 98.5543% ( 2) 00:10:19.847 12317.610 - 12370.249: 98.5689% ( 2) 00:10:19.847 12370.249 - 12422.888: 98.5835% ( 2) 00:10:19.847 12422.888 - 12475.528: 98.5981% ( 2) 00:10:19.847 12738.724 - 12791.364: 98.6346% ( 5) 00:10:19.847 12791.364 - 12844.003: 98.6711% ( 5) 00:10:19.847 12844.003 - 12896.643: 98.6784% ( 1) 00:10:19.847 12896.643 - 12949.282: 98.6857% ( 1) 00:10:19.847 12949.282 - 13001.921: 98.7004% ( 2) 00:10:19.847 13001.921 - 13054.561: 98.7150% ( 2) 00:10:19.847 13054.561 - 13107.200: 98.7296% ( 2) 00:10:19.847 13107.200 - 13159.839: 98.7515% ( 3) 00:10:19.847 13159.839 - 13212.479: 98.7661% ( 2) 00:10:19.847 13212.479 - 13265.118: 98.7807% ( 2) 00:10:19.847 13265.118 - 13317.757: 98.8026% ( 3) 00:10:19.847 13317.757 - 13370.397: 98.8172% ( 2) 00:10:19.847 13370.397 - 13423.036: 98.8318% ( 2) 00:10:19.847 13423.036 - 13475.676: 98.8464% ( 2) 00:10:19.847 13475.676 - 13580.954: 98.8829% ( 5) 00:10:19.847 13580.954 - 13686.233: 98.9194% ( 5) 00:10:19.847 13686.233 - 13791.512: 98.9486% ( 4) 00:10:19.847 13791.512 - 13896.790: 98.9851% ( 5) 00:10:19.847 13896.790 - 14002.069: 99.0216% ( 5) 00:10:19.847 14002.069 - 14107.348: 99.0508% ( 4) 
00:10:19.847 14107.348 - 14212.627: 99.0654% ( 2) 00:10:19.847 31373.057 - 31583.614: 99.0800% ( 2) 00:10:19.847 31583.614 - 31794.172: 99.1384% ( 8) 00:10:19.847 31794.172 - 32004.729: 99.1968% ( 8) 00:10:19.847 32004.729 - 32215.287: 99.2553% ( 8) 00:10:19.847 32215.287 - 32425.844: 99.3137% ( 8) 00:10:19.847 32425.844 - 32636.402: 99.3648% ( 7) 00:10:19.847 32636.402 - 32846.959: 99.4159% ( 7) 00:10:19.847 32846.959 - 33057.516: 99.4743% ( 8) 00:10:19.847 33057.516 - 33268.074: 99.5327% ( 8) 00:10:19.847 37900.337 - 38110.895: 99.5692% ( 5) 00:10:19.847 38110.895 - 38321.452: 99.6276% ( 8) 00:10:19.847 38321.452 - 38532.010: 99.6714% ( 6) 00:10:19.847 38532.010 - 38742.567: 99.7298% ( 8) 00:10:19.847 38742.567 - 38953.124: 99.7810% ( 7) 00:10:19.847 38953.124 - 39163.682: 99.8175% ( 5) 00:10:19.847 39163.682 - 39374.239: 99.8759% ( 8) 00:10:19.847 39374.239 - 39584.797: 99.9270% ( 7) 00:10:19.847 39584.797 - 39795.354: 99.9854% ( 8) 00:10:19.847 39795.354 - 40005.912: 100.0000% ( 2) 00:10:19.847 00:10:19.847 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:19.847 ============================================================================== 00:10:19.847 Range in us Cumulative IO count 00:10:19.847 8001.182 - 8053.822: 0.0581% ( 8) 00:10:19.847 8053.822 - 8106.461: 0.0945% ( 5) 00:10:19.847 8106.461 - 8159.100: 0.1672% ( 10) 00:10:19.847 8159.100 - 8211.740: 0.3561% ( 26) 00:10:19.847 8211.740 - 8264.379: 0.8721% ( 71) 00:10:19.847 8264.379 - 8317.018: 1.8750% ( 138) 00:10:19.847 8317.018 - 8369.658: 3.2994% ( 196) 00:10:19.847 8369.658 - 8422.297: 5.1453% ( 254) 00:10:19.847 8422.297 - 8474.937: 7.3692% ( 306) 00:10:19.847 8474.937 - 8527.576: 9.9419% ( 354) 00:10:19.847 8527.576 - 8580.215: 13.0087% ( 422) 00:10:19.847 8580.215 - 8632.855: 16.8023% ( 522) 00:10:19.847 8632.855 - 8685.494: 20.9375% ( 569) 00:10:19.847 8685.494 - 8738.133: 25.3997% ( 614) 00:10:19.847 8738.133 - 8790.773: 30.2616% ( 669) 00:10:19.847 8790.773 - 8843.412: 35.2035% ( 680) 00:10:19.847 8843.412 - 8896.051: 40.2689% ( 697) 00:10:19.847 8896.051 - 8948.691: 45.5233% ( 723) 00:10:19.847 8948.691 - 9001.330: 50.7267% ( 716) 00:10:19.847 9001.330 - 9053.969: 55.8212% ( 701) 00:10:19.847 9053.969 - 9106.609: 60.6759% ( 668) 00:10:19.847 9106.609 - 9159.248: 65.3125% ( 638) 00:10:19.847 9159.248 - 9211.888: 69.4913% ( 575) 00:10:19.847 9211.888 - 9264.527: 73.3648% ( 533) 00:10:19.847 9264.527 - 9317.166: 77.0349% ( 505) 00:10:19.847 9317.166 - 9369.806: 80.1599% ( 430) 00:10:19.847 9369.806 - 9422.445: 82.8270% ( 367) 00:10:19.847 9422.445 - 9475.084: 85.0799% ( 310) 00:10:19.847 9475.084 - 9527.724: 86.8750% ( 247) 00:10:19.847 9527.724 - 9580.363: 88.3140% ( 198) 00:10:19.847 9580.363 - 9633.002: 89.4477% ( 156) 00:10:19.847 9633.002 - 9685.642: 90.3198% ( 120) 00:10:19.847 9685.642 - 9738.281: 91.0901% ( 106) 00:10:19.847 9738.281 - 9790.920: 91.8023% ( 98) 00:10:19.847 9790.920 - 9843.560: 92.4346% ( 87) 00:10:19.847 9843.560 - 9896.199: 93.0087% ( 79) 00:10:19.847 9896.199 - 9948.839: 93.5392% ( 73) 00:10:19.847 9948.839 - 10001.478: 94.0116% ( 65) 00:10:19.847 10001.478 - 10054.117: 94.4041% ( 54) 00:10:19.847 10054.117 - 10106.757: 94.7238% ( 44) 00:10:19.847 10106.757 - 10159.396: 94.9782% ( 35) 00:10:19.848 10159.396 - 10212.035: 95.2326% ( 35) 00:10:19.848 10212.035 - 10264.675: 95.4578% ( 31) 00:10:19.848 10264.675 - 10317.314: 95.6831% ( 31) 00:10:19.848 10317.314 - 10369.953: 95.8430% ( 22) 00:10:19.848 10369.953 - 10422.593: 95.9811% ( 19) 00:10:19.848 10422.593 - 10475.232: 
96.1555% ( 24) 00:10:19.848 10475.232 - 10527.871: 96.3081% ( 21) 00:10:19.848 10527.871 - 10580.511: 96.4026% ( 13) 00:10:19.848 10580.511 - 10633.150: 96.4535% ( 7) 00:10:19.848 10633.150 - 10685.790: 96.5116% ( 8) 00:10:19.848 10685.790 - 10738.429: 96.5988% ( 12) 00:10:19.848 10738.429 - 10791.068: 96.6788% ( 11) 00:10:19.848 10791.068 - 10843.708: 96.7660% ( 12) 00:10:19.848 10843.708 - 10896.347: 96.8823% ( 16) 00:10:19.848 10896.347 - 10948.986: 97.0131% ( 18) 00:10:19.848 10948.986 - 11001.626: 97.1366% ( 17) 00:10:19.848 11001.626 - 11054.265: 97.2311% ( 13) 00:10:19.848 11054.265 - 11106.904: 97.3474% ( 16) 00:10:19.848 11106.904 - 11159.544: 97.4346% ( 12) 00:10:19.848 11159.544 - 11212.183: 97.5509% ( 16) 00:10:19.848 11212.183 - 11264.822: 97.6672% ( 16) 00:10:19.848 11264.822 - 11317.462: 97.8052% ( 19) 00:10:19.848 11317.462 - 11370.101: 97.8997% ( 13) 00:10:19.848 11370.101 - 11422.741: 97.9869% ( 12) 00:10:19.848 11422.741 - 11475.380: 98.0596% ( 10) 00:10:19.848 11475.380 - 11528.019: 98.1323% ( 10) 00:10:19.848 11528.019 - 11580.659: 98.1904% ( 8) 00:10:19.848 11580.659 - 11633.298: 98.2413% ( 7) 00:10:19.848 11633.298 - 11685.937: 98.2631% ( 3) 00:10:19.848 11685.937 - 11738.577: 98.2776% ( 2) 00:10:19.848 11738.577 - 11791.216: 98.2849% ( 1) 00:10:19.848 11791.216 - 11843.855: 98.2994% ( 2) 00:10:19.848 11843.855 - 11896.495: 98.3140% ( 2) 00:10:19.848 11896.495 - 11949.134: 98.3285% ( 2) 00:10:19.848 11949.134 - 12001.773: 98.3358% ( 1) 00:10:19.848 12001.773 - 12054.413: 98.3503% ( 2) 00:10:19.848 12054.413 - 12107.052: 98.3648% ( 2) 00:10:19.848 12107.052 - 12159.692: 98.3794% ( 2) 00:10:19.848 12159.692 - 12212.331: 98.3939% ( 2) 00:10:19.848 12212.331 - 12264.970: 98.4084% ( 2) 00:10:19.848 12264.970 - 12317.610: 98.4230% ( 2) 00:10:19.848 12317.610 - 12370.249: 98.4375% ( 2) 00:10:19.848 12370.249 - 12422.888: 98.4520% ( 2) 00:10:19.848 12422.888 - 12475.528: 98.4666% ( 2) 00:10:19.848 12475.528 - 12528.167: 98.4811% ( 2) 00:10:19.848 12528.167 - 12580.806: 98.4956% ( 2) 00:10:19.848 12580.806 - 12633.446: 98.5102% ( 2) 00:10:19.848 12633.446 - 12686.085: 98.5247% ( 2) 00:10:19.848 12686.085 - 12738.724: 98.5392% ( 2) 00:10:19.848 12738.724 - 12791.364: 98.5538% ( 2) 00:10:19.848 12791.364 - 12844.003: 98.5683% ( 2) 00:10:19.848 12844.003 - 12896.643: 98.5828% ( 2) 00:10:19.848 12896.643 - 12949.282: 98.5974% ( 2) 00:10:19.848 12949.282 - 13001.921: 98.6047% ( 1) 00:10:19.848 13001.921 - 13054.561: 98.6410% ( 5) 00:10:19.848 13054.561 - 13107.200: 98.6555% ( 2) 00:10:19.848 13107.200 - 13159.839: 98.6628% ( 1) 00:10:19.848 13159.839 - 13212.479: 98.6846% ( 3) 00:10:19.848 13212.479 - 13265.118: 98.6991% ( 2) 00:10:19.848 13265.118 - 13317.757: 98.7209% ( 3) 00:10:19.848 13317.757 - 13370.397: 98.7355% ( 2) 00:10:19.848 13370.397 - 13423.036: 98.7500% ( 2) 00:10:19.848 13423.036 - 13475.676: 98.7718% ( 3) 00:10:19.848 13475.676 - 13580.954: 98.8081% ( 5) 00:10:19.848 13580.954 - 13686.233: 98.8445% ( 5) 00:10:19.848 13686.233 - 13791.512: 98.8808% ( 5) 00:10:19.848 13791.512 - 13896.790: 98.9172% ( 5) 00:10:19.848 13896.790 - 14002.069: 98.9535% ( 5) 00:10:19.848 14002.069 - 14107.348: 98.9898% ( 5) 00:10:19.848 14107.348 - 14212.627: 99.0189% ( 4) 00:10:19.848 14212.627 - 14317.905: 99.0552% ( 5) 00:10:19.848 14317.905 - 14423.184: 99.0698% ( 2) 00:10:19.848 23582.432 - 23687.711: 99.0988% ( 4) 00:10:19.848 23687.711 - 23792.990: 99.1206% ( 3) 00:10:19.848 23792.990 - 23898.268: 99.1424% ( 3) 00:10:19.848 23898.268 - 24003.547: 99.1715% ( 4) 00:10:19.848 
24003.547 - 24108.826: 99.2006% ( 4) 00:10:19.848 24108.826 - 24214.104: 99.2297% ( 4) 00:10:19.848 24214.104 - 24319.383: 99.2587% ( 4) 00:10:19.848 24319.383 - 24424.662: 99.2878% ( 4) 00:10:19.848 24424.662 - 24529.941: 99.3169% ( 4) 00:10:19.848 24529.941 - 24635.219: 99.3459% ( 4) 00:10:19.848 24635.219 - 24740.498: 99.3677% ( 3) 00:10:19.848 24740.498 - 24845.777: 99.3968% ( 4) 00:10:19.848 24845.777 - 24951.055: 99.4259% ( 4) 00:10:19.848 24951.055 - 25056.334: 99.4549% ( 4) 00:10:19.848 25056.334 - 25161.613: 99.4840% ( 4) 00:10:19.848 25161.613 - 25266.892: 99.5131% ( 4) 00:10:19.848 25266.892 - 25372.170: 99.5349% ( 3) 00:10:19.848 30951.942 - 31162.500: 99.5785% ( 6) 00:10:19.848 31162.500 - 31373.057: 99.6439% ( 9) 00:10:19.848 31373.057 - 31583.614: 99.6948% ( 7) 00:10:19.848 31583.614 - 31794.172: 99.7529% ( 8) 00:10:19.848 31794.172 - 32004.729: 99.8110% ( 8) 00:10:19.848 32004.729 - 32215.287: 99.8619% ( 7) 00:10:19.848 32215.287 - 32425.844: 99.9201% ( 8) 00:10:19.848 32425.844 - 32636.402: 99.9782% ( 8) 00:10:19.848 32636.402 - 32846.959: 100.0000% ( 3) 00:10:19.848 00:10:19.848 16:04:38 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:21.268 Initializing NVMe Controllers 00:10:21.268 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:21.268 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:21.268 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:21.268 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:21.268 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:21.268 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:21.268 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:21.268 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:21.268 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:21.268 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:21.268 Initialization complete. Launching workers. 
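The write-workload run whose results follow was launched with the spdk_nvme_perf command shown just above. A minimal sketch of reproducing that invocation by hand is below; the flag annotations in the comments are assumptions based on common spdk_nvme_perf usage (queue depth, workload type, I/O size in bytes, run time in seconds, latency tracking, shared-memory group ID), not something this log states explicitly.

#!/usr/bin/env bash
# Sketch only: re-run the same write workload outside the CI harness.
# Assumed flag meanings (not stated in this log):
#   -q 128    queue depth per namespace
#   -w write  workload type
#   -o 12288  I/O size in bytes (12 KiB)
#   -t 1      run time in seconds
#   -LL       enable latency tracking; repeating -L is read here as requesting
#             the detailed histograms printed below
#   -i 0      shared-memory group ID
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0

Run against the same attached controllers, this should emit summary and histogram sections in the same format as those that follow.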
00:10:21.268 ======================================================== 00:10:21.268 Latency(us) 00:10:21.268 Device Information : IOPS MiB/s Average min max 00:10:21.268 PCIE (0000:00:10.0) NSID 1 from core 0: 12307.51 144.23 10430.87 7941.56 40564.18 00:10:21.268 PCIE (0000:00:11.0) NSID 1 from core 0: 12307.51 144.23 10416.21 8253.04 38754.96 00:10:21.268 PCIE (0000:00:13.0) NSID 1 from core 0: 12307.51 144.23 10401.40 8128.39 38268.46 00:10:21.268 PCIE (0000:00:12.0) NSID 1 from core 0: 12307.51 144.23 10386.44 8019.59 36368.97 00:10:21.268 PCIE (0000:00:12.0) NSID 2 from core 0: 12307.51 144.23 10371.38 8054.69 35040.25 00:10:21.268 PCIE (0000:00:12.0) NSID 3 from core 0: 12371.28 144.98 10303.43 8142.49 26472.18 00:10:21.268 ======================================================== 00:10:21.268 Total : 73908.82 866.12 10384.88 7941.56 40564.18 00:10:21.268 00:10:21.268 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:21.268 ================================================================================= 00:10:21.268 1.00000% : 8527.576us 00:10:21.268 10.00000% : 9001.330us 00:10:21.268 25.00000% : 9317.166us 00:10:21.268 50.00000% : 9685.642us 00:10:21.268 75.00000% : 10317.314us 00:10:21.268 90.00000% : 12475.528us 00:10:21.268 95.00000% : 14317.905us 00:10:21.268 98.00000% : 19055.447us 00:10:21.268 99.00000% : 31162.500us 00:10:21.268 99.50000% : 38953.124us 00:10:21.268 99.90000% : 40216.469us 00:10:21.268 99.99000% : 40637.584us 00:10:21.268 99.99900% : 40637.584us 00:10:21.268 99.99990% : 40637.584us 00:10:21.268 99.99999% : 40637.584us 00:10:21.268 00:10:21.268 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:21.268 ================================================================================= 00:10:21.268 1.00000% : 8685.494us 00:10:21.268 10.00000% : 9053.969us 00:10:21.268 25.00000% : 9317.166us 00:10:21.268 50.00000% : 9685.642us 00:10:21.269 75.00000% : 10369.953us 00:10:21.269 90.00000% : 12317.610us 00:10:21.269 95.00000% : 14212.627us 00:10:21.269 98.00000% : 19266.005us 00:10:21.269 99.00000% : 29478.040us 00:10:21.269 99.50000% : 37268.665us 00:10:21.269 99.90000% : 38532.010us 00:10:21.269 99.99000% : 38742.567us 00:10:21.269 99.99900% : 38953.124us 00:10:21.269 99.99990% : 38953.124us 00:10:21.269 99.99999% : 38953.124us 00:10:21.269 00:10:21.269 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:21.269 ================================================================================= 00:10:21.269 1.00000% : 8580.215us 00:10:21.269 10.00000% : 9001.330us 00:10:21.269 25.00000% : 9317.166us 00:10:21.269 50.00000% : 9738.281us 00:10:21.269 75.00000% : 10317.314us 00:10:21.269 90.00000% : 12212.331us 00:10:21.269 95.00000% : 13791.512us 00:10:21.269 98.00000% : 19897.677us 00:10:21.269 99.00000% : 29056.925us 00:10:21.269 99.50000% : 36847.550us 00:10:21.269 99.90000% : 38110.895us 00:10:21.269 99.99000% : 38321.452us 00:10:21.269 99.99900% : 38321.452us 00:10:21.269 99.99990% : 38321.452us 00:10:21.269 99.99999% : 38321.452us 00:10:21.269 00:10:21.269 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:21.269 ================================================================================= 00:10:21.269 1.00000% : 8580.215us 00:10:21.269 10.00000% : 9053.969us 00:10:21.269 25.00000% : 9317.166us 00:10:21.269 50.00000% : 9685.642us 00:10:21.269 75.00000% : 10369.953us 00:10:21.269 90.00000% : 12317.610us 00:10:21.269 95.00000% : 13896.790us 00:10:21.269 98.00000% : 19687.120us 
00:10:21.269 99.00000% : 27161.908us 00:10:21.269 99.50000% : 34741.976us 00:10:21.269 99.90000% : 36215.878us 00:10:21.269 99.99000% : 36426.435us 00:10:21.269 99.99900% : 36426.435us 00:10:21.269 99.99990% : 36426.435us 00:10:21.269 99.99999% : 36426.435us 00:10:21.269 00:10:21.269 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:21.269 ================================================================================= 00:10:21.269 1.00000% : 8527.576us 00:10:21.269 10.00000% : 9053.969us 00:10:21.269 25.00000% : 9317.166us 00:10:21.269 50.00000% : 9685.642us 00:10:21.269 75.00000% : 10317.314us 00:10:21.269 90.00000% : 12422.888us 00:10:21.269 95.00000% : 14002.069us 00:10:21.269 98.00000% : 19687.120us 00:10:21.269 99.00000% : 25372.170us 00:10:21.269 99.50000% : 33478.631us 00:10:21.269 99.90000% : 34741.976us 00:10:21.269 99.99000% : 35163.091us 00:10:21.269 99.99900% : 35163.091us 00:10:21.269 99.99990% : 35163.091us 00:10:21.269 99.99999% : 35163.091us 00:10:21.269 00:10:21.269 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:21.269 ================================================================================= 00:10:21.269 1.00000% : 8580.215us 00:10:21.269 10.00000% : 9053.969us 00:10:21.269 25.00000% : 9317.166us 00:10:21.269 50.00000% : 9685.642us 00:10:21.269 75.00000% : 10317.314us 00:10:21.269 90.00000% : 12475.528us 00:10:21.269 95.00000% : 14212.627us 00:10:21.269 98.00000% : 18002.660us 00:10:21.269 99.00000% : 20108.235us 00:10:21.269 99.50000% : 24845.777us 00:10:21.269 99.90000% : 26214.400us 00:10:21.269 99.99000% : 26530.236us 00:10:21.269 99.99900% : 26530.236us 00:10:21.269 99.99990% : 26530.236us 00:10:21.269 99.99999% : 26530.236us 00:10:21.269 00:10:21.269 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:21.269 ============================================================================== 00:10:21.269 Range in us Cumulative IO count 00:10:21.269 7895.904 - 7948.543: 0.0081% ( 1) 00:10:21.269 7948.543 - 8001.182: 0.0405% ( 4) 00:10:21.269 8001.182 - 8053.822: 0.0648% ( 3) 00:10:21.269 8053.822 - 8106.461: 0.0891% ( 3) 00:10:21.269 8106.461 - 8159.100: 0.0972% ( 1) 00:10:21.269 8159.100 - 8211.740: 0.1052% ( 1) 00:10:21.269 8211.740 - 8264.379: 0.2267% ( 15) 00:10:21.269 8264.379 - 8317.018: 0.2834% ( 7) 00:10:21.269 8317.018 - 8369.658: 0.3805% ( 12) 00:10:21.269 8369.658 - 8422.297: 0.7448% ( 45) 00:10:21.269 8422.297 - 8474.937: 0.9796% ( 29) 00:10:21.269 8474.937 - 8527.576: 1.2387% ( 32) 00:10:21.269 8527.576 - 8580.215: 1.6758% ( 54) 00:10:21.269 8580.215 - 8632.855: 2.0078% ( 41) 00:10:21.269 8632.855 - 8685.494: 2.4045% ( 49) 00:10:21.269 8685.494 - 8738.133: 3.0117% ( 75) 00:10:21.269 8738.133 - 8790.773: 3.9670% ( 118) 00:10:21.269 8790.773 - 8843.412: 5.0194% ( 130) 00:10:21.269 8843.412 - 8896.051: 6.7519% ( 214) 00:10:21.269 8896.051 - 8948.691: 8.9135% ( 267) 00:10:21.269 8948.691 - 9001.330: 10.7837% ( 231) 00:10:21.269 9001.330 - 9053.969: 12.7753% ( 246) 00:10:21.269 9053.969 - 9106.609: 15.1878% ( 298) 00:10:21.269 9106.609 - 9159.248: 17.7866% ( 321) 00:10:21.269 9159.248 - 9211.888: 20.6201% ( 350) 00:10:21.269 9211.888 - 9264.527: 23.9880% ( 416) 00:10:21.269 9264.527 - 9317.166: 27.5907% ( 445) 00:10:21.269 9317.166 - 9369.806: 30.3109% ( 336) 00:10:21.269 9369.806 - 9422.445: 33.0878% ( 343) 00:10:21.269 9422.445 - 9475.084: 36.1399% ( 377) 00:10:21.269 9475.084 - 9527.724: 40.0340% ( 481) 00:10:21.269 9527.724 - 9580.363: 43.7581% ( 460) 00:10:21.269 9580.363 - 9633.002: 
47.1260% ( 416) 00:10:21.269 9633.002 - 9685.642: 51.1010% ( 491) 00:10:21.269 9685.642 - 9738.281: 54.3394% ( 400) 00:10:21.269 9738.281 - 9790.920: 57.2053% ( 354) 00:10:21.269 9790.920 - 9843.560: 60.1522% ( 364) 00:10:21.269 9843.560 - 9896.199: 62.2733% ( 262) 00:10:21.269 9896.199 - 9948.839: 64.1435% ( 231) 00:10:21.269 9948.839 - 10001.478: 66.0703% ( 238) 00:10:21.269 10001.478 - 10054.117: 67.8999% ( 226) 00:10:21.269 10054.117 - 10106.757: 69.7134% ( 224) 00:10:21.269 10106.757 - 10159.396: 71.3245% ( 199) 00:10:21.269 10159.396 - 10212.035: 73.1137% ( 221) 00:10:21.269 10212.035 - 10264.675: 74.5304% ( 175) 00:10:21.269 10264.675 - 10317.314: 75.4615% ( 115) 00:10:21.269 10317.314 - 10369.953: 76.3115% ( 105) 00:10:21.269 10369.953 - 10422.593: 77.3316% ( 126) 00:10:21.269 10422.593 - 10475.232: 78.0845% ( 93) 00:10:21.269 10475.232 - 10527.871: 78.8051% ( 89) 00:10:21.269 10527.871 - 10580.511: 79.5661% ( 94) 00:10:21.269 10580.511 - 10633.150: 80.3433% ( 96) 00:10:21.269 10633.150 - 10685.790: 81.0233% ( 84) 00:10:21.269 10685.790 - 10738.429: 81.6548% ( 78) 00:10:21.269 10738.429 - 10791.068: 82.2701% ( 76) 00:10:21.269 10791.068 - 10843.708: 83.0716% ( 99) 00:10:21.269 10843.708 - 10896.347: 83.6302% ( 69) 00:10:21.269 10896.347 - 10948.986: 84.4317% ( 99) 00:10:21.269 10948.986 - 11001.626: 85.1198% ( 85) 00:10:21.269 11001.626 - 11054.265: 85.7189% ( 74) 00:10:21.269 11054.265 - 11106.904: 86.1075% ( 48) 00:10:21.269 11106.904 - 11159.544: 86.3828% ( 34) 00:10:21.269 11159.544 - 11212.183: 86.5447% ( 20) 00:10:21.269 11212.183 - 11264.822: 86.7147% ( 21) 00:10:21.269 11264.822 - 11317.462: 86.8119% ( 12) 00:10:21.269 11317.462 - 11370.101: 86.9090% ( 12) 00:10:21.269 11370.101 - 11422.741: 87.0709% ( 20) 00:10:21.269 11422.741 - 11475.380: 87.2652% ( 24) 00:10:21.269 11475.380 - 11528.019: 87.4109% ( 18) 00:10:21.269 11528.019 - 11580.659: 87.7024% ( 36) 00:10:21.269 11580.659 - 11633.298: 87.8400% ( 17) 00:10:21.269 11633.298 - 11685.937: 87.9938% ( 19) 00:10:21.269 11685.937 - 11738.577: 88.0667% ( 9) 00:10:21.269 11738.577 - 11791.216: 88.2205% ( 19) 00:10:21.269 11791.216 - 11843.855: 88.3663% ( 18) 00:10:21.269 11843.855 - 11896.495: 88.4958% ( 16) 00:10:21.269 11896.495 - 11949.134: 88.6577% ( 20) 00:10:21.269 11949.134 - 12001.773: 88.8196% ( 20) 00:10:21.269 12001.773 - 12054.413: 88.8925% ( 9) 00:10:21.269 12054.413 - 12107.052: 89.0382% ( 18) 00:10:21.269 12107.052 - 12159.692: 89.1677% ( 16) 00:10:21.269 12159.692 - 12212.331: 89.2811% ( 14) 00:10:21.269 12212.331 - 12264.970: 89.3944% ( 14) 00:10:21.269 12264.970 - 12317.610: 89.5483% ( 19) 00:10:21.269 12317.610 - 12370.249: 89.7587% ( 26) 00:10:21.269 12370.249 - 12422.888: 89.9449% ( 23) 00:10:21.269 12422.888 - 12475.528: 90.2445% ( 37) 00:10:21.269 12475.528 - 12528.167: 90.5036% ( 32) 00:10:21.269 12528.167 - 12580.806: 90.7545% ( 31) 00:10:21.269 12580.806 - 12633.446: 90.9812% ( 28) 00:10:21.269 12633.446 - 12686.085: 91.1755% ( 24) 00:10:21.269 12686.085 - 12738.724: 91.4022% ( 28) 00:10:21.269 12738.724 - 12791.364: 91.5884% ( 23) 00:10:21.269 12791.364 - 12844.003: 91.7827% ( 24) 00:10:21.269 12844.003 - 12896.643: 91.9932% ( 26) 00:10:21.269 12896.643 - 12949.282: 92.2847% ( 36) 00:10:21.269 12949.282 - 13001.921: 92.4547% ( 21) 00:10:21.269 13001.921 - 13054.561: 92.5599% ( 13) 00:10:21.269 13054.561 - 13107.200: 92.6652% ( 13) 00:10:21.269 13107.200 - 13159.839: 92.7299% ( 8) 00:10:21.269 13159.839 - 13212.479: 92.8837% ( 19) 00:10:21.269 13212.479 - 13265.118: 92.9890% ( 13) 00:10:21.269 
13265.118 - 13317.757: 93.1671% ( 22) 00:10:21.269 13317.757 - 13370.397: 93.3371% ( 21) 00:10:21.269 13370.397 - 13423.036: 93.6043% ( 33) 00:10:21.269 13423.036 - 13475.676: 93.8067% ( 25) 00:10:21.269 13475.676 - 13580.954: 93.9524% ( 18) 00:10:21.269 13580.954 - 13686.233: 94.0819% ( 16) 00:10:21.269 13686.233 - 13791.512: 94.1791% ( 12) 00:10:21.269 13791.512 - 13896.790: 94.3329% ( 19) 00:10:21.269 13896.790 - 14002.069: 94.4543% ( 15) 00:10:21.269 14002.069 - 14107.348: 94.6891% ( 29) 00:10:21.270 14107.348 - 14212.627: 94.9968% ( 38) 00:10:21.270 14212.627 - 14317.905: 95.2315% ( 29) 00:10:21.270 14317.905 - 14423.184: 95.5068% ( 34) 00:10:21.270 14423.184 - 14528.463: 95.8549% ( 43) 00:10:21.270 14528.463 - 14633.741: 96.1140% ( 32) 00:10:21.270 14633.741 - 14739.020: 96.3002% ( 23) 00:10:21.270 14739.020 - 14844.299: 96.4702% ( 21) 00:10:21.270 14844.299 - 14949.578: 96.6078% ( 17) 00:10:21.270 14949.578 - 15054.856: 96.7698% ( 20) 00:10:21.270 15054.856 - 15160.135: 96.8831% ( 14) 00:10:21.270 15160.135 - 15265.414: 97.0045% ( 15) 00:10:21.270 15265.414 - 15370.692: 97.1098% ( 13) 00:10:21.270 15370.692 - 15475.971: 97.1422% ( 4) 00:10:21.270 15475.971 - 15581.250: 97.2069% ( 8) 00:10:21.270 15581.250 - 15686.529: 97.2555% ( 6) 00:10:21.270 15686.529 - 15791.807: 97.3041% ( 6) 00:10:21.270 15791.807 - 15897.086: 97.3527% ( 6) 00:10:21.270 15897.086 - 16002.365: 97.4012% ( 6) 00:10:21.270 16002.365 - 16107.643: 97.4093% ( 1) 00:10:21.270 18002.660 - 18107.939: 97.4174% ( 1) 00:10:21.270 18107.939 - 18213.218: 97.4417% ( 3) 00:10:21.270 18213.218 - 18318.496: 97.4579% ( 2) 00:10:21.270 18318.496 - 18423.775: 97.4984% ( 5) 00:10:21.270 18423.775 - 18529.054: 97.5631% ( 8) 00:10:21.270 18529.054 - 18634.333: 97.6846% ( 15) 00:10:21.270 18634.333 - 18739.611: 97.7494% ( 8) 00:10:21.270 18739.611 - 18844.890: 97.8789% ( 16) 00:10:21.270 18844.890 - 18950.169: 97.9841% ( 13) 00:10:21.270 18950.169 - 19055.447: 98.0975% ( 14) 00:10:21.270 19055.447 - 19160.726: 98.1865% ( 11) 00:10:21.270 19160.726 - 19266.005: 98.2270% ( 5) 00:10:21.270 19266.005 - 19371.284: 98.2513% ( 3) 00:10:21.270 19371.284 - 19476.562: 98.2837% ( 4) 00:10:21.270 19476.562 - 19581.841: 98.3161% ( 4) 00:10:21.270 19581.841 - 19687.120: 98.3808% ( 8) 00:10:21.270 19687.120 - 19792.398: 98.4537% ( 9) 00:10:21.270 19792.398 - 19897.677: 98.5023% ( 6) 00:10:21.270 19897.677 - 20002.956: 98.5751% ( 9) 00:10:21.270 20002.956 - 20108.235: 98.6075% ( 4) 00:10:21.270 20108.235 - 20213.513: 98.6318% ( 3) 00:10:21.270 20213.513 - 20318.792: 98.6642% ( 4) 00:10:21.270 20318.792 - 20424.071: 98.7128% ( 6) 00:10:21.270 20424.071 - 20529.349: 98.7209% ( 1) 00:10:21.270 20529.349 - 20634.628: 98.7613% ( 5) 00:10:21.270 20634.628 - 20739.907: 98.7856% ( 3) 00:10:21.270 20739.907 - 20845.186: 98.8180% ( 4) 00:10:21.270 20845.186 - 20950.464: 98.8585% ( 5) 00:10:21.270 20950.464 - 21055.743: 98.8747% ( 2) 00:10:21.270 21055.743 - 21161.022: 98.8990% ( 3) 00:10:21.270 21161.022 - 21266.300: 98.9394% ( 5) 00:10:21.270 21266.300 - 21371.579: 98.9637% ( 3) 00:10:21.270 30951.942 - 31162.500: 99.0204% ( 7) 00:10:21.270 31162.500 - 31373.057: 99.0852% ( 8) 00:10:21.270 31373.057 - 31583.614: 99.2147% ( 16) 00:10:21.270 31583.614 - 31794.172: 99.3280% ( 14) 00:10:21.270 31794.172 - 32004.729: 99.3685% ( 5) 00:10:21.270 32004.729 - 32215.287: 99.3928% ( 3) 00:10:21.270 32215.287 - 32425.844: 99.4333% ( 5) 00:10:21.270 32425.844 - 32636.402: 99.4819% ( 6) 00:10:21.270 38532.010 - 38742.567: 99.4981% ( 2) 00:10:21.270 38742.567 - 38953.124: 
99.5466% ( 6) 00:10:21.270 38953.124 - 39163.682: 99.6114% ( 8) 00:10:21.270 39163.682 - 39374.239: 99.6681% ( 7) 00:10:21.270 39374.239 - 39584.797: 99.7247% ( 7) 00:10:21.270 39584.797 - 39795.354: 99.7814% ( 7) 00:10:21.270 39795.354 - 40005.912: 99.8543% ( 9) 00:10:21.270 40005.912 - 40216.469: 99.9109% ( 7) 00:10:21.270 40216.469 - 40427.027: 99.9757% ( 8) 00:10:21.270 40427.027 - 40637.584: 100.0000% ( 3) 00:10:21.270 00:10:21.270 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:21.270 ============================================================================== 00:10:21.270 Range in us Cumulative IO count 00:10:21.270 8211.740 - 8264.379: 0.0081% ( 1) 00:10:21.270 8317.018 - 8369.658: 0.0486% ( 5) 00:10:21.270 8369.658 - 8422.297: 0.1376% ( 11) 00:10:21.270 8422.297 - 8474.937: 0.3805% ( 30) 00:10:21.270 8474.937 - 8527.576: 0.5019% ( 15) 00:10:21.270 8527.576 - 8580.215: 0.6558% ( 19) 00:10:21.270 8580.215 - 8632.855: 0.9472% ( 36) 00:10:21.270 8632.855 - 8685.494: 1.5868% ( 79) 00:10:21.270 8685.494 - 8738.133: 1.9430% ( 44) 00:10:21.270 8738.133 - 8790.773: 2.5340% ( 73) 00:10:21.270 8790.773 - 8843.412: 3.5541% ( 126) 00:10:21.270 8843.412 - 8896.051: 5.6509% ( 259) 00:10:21.270 8896.051 - 8948.691: 8.0959% ( 302) 00:10:21.270 8948.691 - 9001.330: 9.7798% ( 208) 00:10:21.270 9001.330 - 9053.969: 12.1438% ( 292) 00:10:21.270 9053.969 - 9106.609: 14.7749% ( 325) 00:10:21.270 9106.609 - 9159.248: 18.0457% ( 404) 00:10:21.270 9159.248 - 9211.888: 20.6201% ( 318) 00:10:21.270 9211.888 - 9264.527: 23.9637% ( 413) 00:10:21.270 9264.527 - 9317.166: 26.6435% ( 331) 00:10:21.270 9317.166 - 9369.806: 29.4608% ( 348) 00:10:21.270 9369.806 - 9422.445: 31.9705% ( 310) 00:10:21.270 9422.445 - 9475.084: 35.0308% ( 378) 00:10:21.270 9475.084 - 9527.724: 38.3177% ( 406) 00:10:21.270 9527.724 - 9580.363: 42.3251% ( 495) 00:10:21.270 9580.363 - 9633.002: 46.3973% ( 503) 00:10:21.270 9633.002 - 9685.642: 50.6962% ( 531) 00:10:21.270 9685.642 - 9738.281: 54.5094% ( 471) 00:10:21.270 9738.281 - 9790.920: 57.9906% ( 430) 00:10:21.270 9790.920 - 9843.560: 61.0266% ( 375) 00:10:21.270 9843.560 - 9896.199: 63.9492% ( 361) 00:10:21.270 9896.199 - 9948.839: 65.7869% ( 227) 00:10:21.270 9948.839 - 10001.478: 67.6652% ( 232) 00:10:21.270 10001.478 - 10054.117: 69.0010% ( 165) 00:10:21.270 10054.117 - 10106.757: 70.2234% ( 151) 00:10:21.270 10106.757 - 10159.396: 70.9683% ( 92) 00:10:21.270 10159.396 - 10212.035: 72.3122% ( 166) 00:10:21.270 10212.035 - 10264.675: 73.6237% ( 162) 00:10:21.270 10264.675 - 10317.314: 74.7005% ( 133) 00:10:21.270 10317.314 - 10369.953: 76.1172% ( 175) 00:10:21.270 10369.953 - 10422.593: 77.5826% ( 181) 00:10:21.270 10422.593 - 10475.232: 78.5298% ( 117) 00:10:21.270 10475.232 - 10527.871: 79.7037% ( 145) 00:10:21.270 10527.871 - 10580.511: 80.6104% ( 112) 00:10:21.270 10580.511 - 10633.150: 81.3472% ( 91) 00:10:21.270 10633.150 - 10685.790: 82.0029% ( 81) 00:10:21.270 10685.790 - 10738.429: 82.5049% ( 62) 00:10:21.270 10738.429 - 10791.068: 83.0554% ( 68) 00:10:21.270 10791.068 - 10843.708: 83.6626% ( 75) 00:10:21.270 10843.708 - 10896.347: 84.3345% ( 83) 00:10:21.270 10896.347 - 10948.986: 85.1927% ( 106) 00:10:21.270 10948.986 - 11001.626: 85.7027% ( 63) 00:10:21.270 11001.626 - 11054.265: 86.0832% ( 47) 00:10:21.270 11054.265 - 11106.904: 86.3909% ( 38) 00:10:21.270 11106.904 - 11159.544: 86.6580% ( 33) 00:10:21.270 11159.544 - 11212.183: 86.8928% ( 29) 00:10:21.270 11212.183 - 11264.822: 87.0790% ( 23) 00:10:21.270 11264.822 - 11317.462: 87.2895% ( 26) 
00:10:21.270 11317.462 - 11370.101: 87.4352% ( 18) 00:10:21.270 11370.101 - 11422.741: 87.6376% ( 25) 00:10:21.270 11422.741 - 11475.380: 87.8238% ( 23) 00:10:21.270 11475.380 - 11528.019: 88.0019% ( 22) 00:10:21.270 11528.019 - 11580.659: 88.1477% ( 18) 00:10:21.270 11580.659 - 11633.298: 88.2853% ( 17) 00:10:21.270 11633.298 - 11685.937: 88.3339% ( 6) 00:10:21.270 11685.937 - 11738.577: 88.3744% ( 5) 00:10:21.270 11738.577 - 11791.216: 88.4148% ( 5) 00:10:21.270 11791.216 - 11843.855: 88.4634% ( 6) 00:10:21.270 11843.855 - 11896.495: 88.6010% ( 17) 00:10:21.270 11896.495 - 11949.134: 88.7953% ( 24) 00:10:21.270 11949.134 - 12001.773: 88.9896% ( 24) 00:10:21.270 12001.773 - 12054.413: 89.2163% ( 28) 00:10:21.270 12054.413 - 12107.052: 89.4430% ( 28) 00:10:21.270 12107.052 - 12159.692: 89.5725% ( 16) 00:10:21.270 12159.692 - 12212.331: 89.7830% ( 26) 00:10:21.270 12212.331 - 12264.970: 89.9692% ( 23) 00:10:21.270 12264.970 - 12317.610: 90.1312% ( 20) 00:10:21.270 12317.610 - 12370.249: 90.2931% ( 20) 00:10:21.270 12370.249 - 12422.888: 90.4064% ( 14) 00:10:21.270 12422.888 - 12475.528: 90.5117% ( 13) 00:10:21.270 12475.528 - 12528.167: 90.6169% ( 13) 00:10:21.270 12528.167 - 12580.806: 90.7302% ( 14) 00:10:21.270 12580.806 - 12633.446: 90.8436% ( 14) 00:10:21.270 12633.446 - 12686.085: 90.9245% ( 10) 00:10:21.270 12686.085 - 12738.724: 91.0217% ( 12) 00:10:21.270 12738.724 - 12791.364: 91.1188% ( 12) 00:10:21.270 12791.364 - 12844.003: 91.2403% ( 15) 00:10:21.270 12844.003 - 12896.643: 91.3941% ( 19) 00:10:21.270 12896.643 - 12949.282: 91.7989% ( 50) 00:10:21.270 12949.282 - 13001.921: 92.2280% ( 53) 00:10:21.270 13001.921 - 13054.561: 92.4870% ( 32) 00:10:21.270 13054.561 - 13107.200: 92.7218% ( 29) 00:10:21.270 13107.200 - 13159.839: 92.9242% ( 25) 00:10:21.270 13159.839 - 13212.479: 93.1428% ( 27) 00:10:21.270 13212.479 - 13265.118: 93.3209% ( 22) 00:10:21.270 13265.118 - 13317.757: 93.4585% ( 17) 00:10:21.270 13317.757 - 13370.397: 93.5557% ( 12) 00:10:21.270 13370.397 - 13423.036: 93.6367% ( 10) 00:10:21.270 13423.036 - 13475.676: 93.6852% ( 6) 00:10:21.270 13475.676 - 13580.954: 93.8633% ( 22) 00:10:21.270 13580.954 - 13686.233: 94.0981% ( 29) 00:10:21.270 13686.233 - 13791.512: 94.3248% ( 28) 00:10:21.270 13791.512 - 13896.790: 94.5677% ( 30) 00:10:21.270 13896.790 - 14002.069: 94.7458% ( 22) 00:10:21.270 14002.069 - 14107.348: 94.8348% ( 11) 00:10:21.271 14107.348 - 14212.627: 95.1101% ( 34) 00:10:21.271 14212.627 - 14317.905: 95.3773% ( 33) 00:10:21.271 14317.905 - 14423.184: 95.6282% ( 31) 00:10:21.271 14423.184 - 14528.463: 95.7659% ( 17) 00:10:21.271 14528.463 - 14633.741: 95.8873% ( 15) 00:10:21.271 14633.741 - 14739.020: 96.0249% ( 17) 00:10:21.271 14739.020 - 14844.299: 96.1464% ( 15) 00:10:21.271 14844.299 - 14949.578: 96.2435% ( 12) 00:10:21.271 14949.578 - 15054.856: 96.3650% ( 15) 00:10:21.271 15054.856 - 15160.135: 96.4945% ( 16) 00:10:21.271 15160.135 - 15265.414: 96.6564% ( 20) 00:10:21.271 15265.414 - 15370.692: 96.8912% ( 29) 00:10:21.271 15370.692 - 15475.971: 96.9964% ( 13) 00:10:21.271 15475.971 - 15581.250: 97.0936% ( 12) 00:10:21.271 15581.250 - 15686.529: 97.1745% ( 10) 00:10:21.271 15686.529 - 15791.807: 97.3041% ( 16) 00:10:21.271 15791.807 - 15897.086: 97.4012% ( 12) 00:10:21.271 15897.086 - 16002.365: 97.4093% ( 1) 00:10:21.271 18213.218 - 18318.496: 97.5146% ( 13) 00:10:21.271 18318.496 - 18423.775: 97.5793% ( 8) 00:10:21.271 18423.775 - 18529.054: 97.6036% ( 3) 00:10:21.271 18529.054 - 18634.333: 97.6279% ( 3) 00:10:21.271 18634.333 - 18739.611: 97.6603% 
( 4) 00:10:21.271 18739.611 - 18844.890: 97.7251% ( 8) 00:10:21.271 18844.890 - 18950.169: 97.8627% ( 17) 00:10:21.271 18950.169 - 19055.447: 97.9275% ( 8) 00:10:21.271 19055.447 - 19160.726: 97.9760% ( 6) 00:10:21.271 19160.726 - 19266.005: 98.0570% ( 10) 00:10:21.271 19266.005 - 19371.284: 98.1299% ( 9) 00:10:21.271 19371.284 - 19476.562: 98.2027% ( 9) 00:10:21.271 19476.562 - 19581.841: 98.2594% ( 7) 00:10:21.271 19581.841 - 19687.120: 98.2999% ( 5) 00:10:21.271 19687.120 - 19792.398: 98.3323% ( 4) 00:10:21.271 19792.398 - 19897.677: 98.3646% ( 4) 00:10:21.271 19897.677 - 20002.956: 98.3970% ( 4) 00:10:21.271 20002.956 - 20108.235: 98.4942% ( 12) 00:10:21.271 20108.235 - 20213.513: 98.5670% ( 9) 00:10:21.271 20213.513 - 20318.792: 98.6318% ( 8) 00:10:21.271 20318.792 - 20424.071: 98.7370% ( 13) 00:10:21.271 20424.071 - 20529.349: 98.8180% ( 10) 00:10:21.271 20529.349 - 20634.628: 98.8423% ( 3) 00:10:21.271 20634.628 - 20739.907: 98.8666% ( 3) 00:10:21.271 20739.907 - 20845.186: 98.8990% ( 4) 00:10:21.271 20845.186 - 20950.464: 98.9233% ( 3) 00:10:21.271 20950.464 - 21055.743: 98.9475% ( 3) 00:10:21.271 21055.743 - 21161.022: 98.9637% ( 2) 00:10:21.271 29056.925 - 29267.483: 98.9961% ( 4) 00:10:21.271 29267.483 - 29478.040: 99.0528% ( 7) 00:10:21.271 29478.040 - 29688.598: 99.1176% ( 8) 00:10:21.271 29688.598 - 29899.155: 99.1823% ( 8) 00:10:21.271 29899.155 - 30109.712: 99.2390% ( 7) 00:10:21.271 30109.712 - 30320.270: 99.3119% ( 9) 00:10:21.271 30320.270 - 30530.827: 99.3766% ( 8) 00:10:21.271 30530.827 - 30741.385: 99.4414% ( 8) 00:10:21.271 30741.385 - 30951.942: 99.4819% ( 5) 00:10:21.271 37058.108 - 37268.665: 99.5385% ( 7) 00:10:21.271 37268.665 - 37479.222: 99.6033% ( 8) 00:10:21.271 37479.222 - 37689.780: 99.6600% ( 7) 00:10:21.271 37689.780 - 37900.337: 99.7247% ( 8) 00:10:21.271 37900.337 - 38110.895: 99.7976% ( 9) 00:10:21.271 38110.895 - 38321.452: 99.8624% ( 8) 00:10:21.271 38321.452 - 38532.010: 99.9271% ( 8) 00:10:21.271 38532.010 - 38742.567: 99.9919% ( 8) 00:10:21.271 38742.567 - 38953.124: 100.0000% ( 1) 00:10:21.271 00:10:21.271 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:21.271 ============================================================================== 00:10:21.271 Range in us Cumulative IO count 00:10:21.271 8106.461 - 8159.100: 0.0162% ( 2) 00:10:21.271 8264.379 - 8317.018: 0.0243% ( 1) 00:10:21.271 8317.018 - 8369.658: 0.0729% ( 6) 00:10:21.271 8369.658 - 8422.297: 0.2348% ( 20) 00:10:21.271 8422.297 - 8474.937: 0.3562% ( 15) 00:10:21.271 8474.937 - 8527.576: 0.6720% ( 39) 00:10:21.271 8527.576 - 8580.215: 1.2468% ( 71) 00:10:21.271 8580.215 - 8632.855: 1.8216% ( 71) 00:10:21.271 8632.855 - 8685.494: 3.0036% ( 146) 00:10:21.271 8685.494 - 8738.133: 3.6593% ( 81) 00:10:21.271 8738.133 - 8790.773: 4.7118% ( 130) 00:10:21.271 8790.773 - 8843.412: 5.6995% ( 122) 00:10:21.271 8843.412 - 8896.051: 6.7034% ( 124) 00:10:21.271 8896.051 - 8948.691: 8.6302% ( 238) 00:10:21.271 8948.691 - 9001.330: 10.5651% ( 239) 00:10:21.271 9001.330 - 9053.969: 12.3219% ( 217) 00:10:21.271 9053.969 - 9106.609: 14.3378% ( 249) 00:10:21.271 9106.609 - 9159.248: 16.4022% ( 255) 00:10:21.271 9159.248 - 9211.888: 19.0334% ( 325) 00:10:21.271 9211.888 - 9264.527: 22.5631% ( 436) 00:10:21.271 9264.527 - 9317.166: 25.9229% ( 415) 00:10:21.271 9317.166 - 9369.806: 29.2422% ( 410) 00:10:21.271 9369.806 - 9422.445: 32.4644% ( 398) 00:10:21.271 9422.445 - 9475.084: 36.0508% ( 443) 00:10:21.271 9475.084 - 9527.724: 39.6778% ( 448) 00:10:21.271 9527.724 - 9580.363: 42.9323% ( 
402) 00:10:21.271 9580.363 - 9633.002: 46.1221% ( 394) 00:10:21.271 9633.002 - 9685.642: 48.8666% ( 339) 00:10:21.271 9685.642 - 9738.281: 51.7325% ( 354) 00:10:21.271 9738.281 - 9790.920: 55.2623% ( 436) 00:10:21.271 9790.920 - 9843.560: 58.4845% ( 398) 00:10:21.271 9843.560 - 9896.199: 61.4152% ( 362) 00:10:21.271 9896.199 - 9948.839: 63.6496% ( 276) 00:10:21.271 9948.839 - 10001.478: 65.9650% ( 286) 00:10:21.271 10001.478 - 10054.117: 67.8595% ( 234) 00:10:21.271 10054.117 - 10106.757: 69.3329% ( 182) 00:10:21.271 10106.757 - 10159.396: 70.5554% ( 151) 00:10:21.271 10159.396 - 10212.035: 71.8264% ( 157) 00:10:21.271 10212.035 - 10264.675: 73.3565% ( 189) 00:10:21.271 10264.675 - 10317.314: 75.0081% ( 204) 00:10:21.271 10317.314 - 10369.953: 76.3115% ( 161) 00:10:21.271 10369.953 - 10422.593: 77.8983% ( 196) 00:10:21.271 10422.593 - 10475.232: 79.0803% ( 146) 00:10:21.271 10475.232 - 10527.871: 80.5133% ( 177) 00:10:21.271 10527.871 - 10580.511: 81.4686% ( 118) 00:10:21.271 10580.511 - 10633.150: 82.1972% ( 90) 00:10:21.271 10633.150 - 10685.790: 82.9825% ( 97) 00:10:21.271 10685.790 - 10738.429: 83.5411% ( 69) 00:10:21.271 10738.429 - 10791.068: 84.2617% ( 89) 00:10:21.271 10791.068 - 10843.708: 85.0065% ( 92) 00:10:21.271 10843.708 - 10896.347: 85.5165% ( 63) 00:10:21.271 10896.347 - 10948.986: 86.0347% ( 64) 00:10:21.271 10948.986 - 11001.626: 86.4071% ( 46) 00:10:21.271 11001.626 - 11054.265: 86.5852% ( 22) 00:10:21.271 11054.265 - 11106.904: 86.7228% ( 17) 00:10:21.271 11106.904 - 11159.544: 86.9171% ( 24) 00:10:21.271 11159.544 - 11212.183: 87.1762% ( 32) 00:10:21.271 11212.183 - 11264.822: 87.4190% ( 30) 00:10:21.271 11264.822 - 11317.462: 87.5810% ( 20) 00:10:21.271 11317.462 - 11370.101: 87.7915% ( 26) 00:10:21.271 11370.101 - 11422.741: 88.1396% ( 43) 00:10:21.271 11422.741 - 11475.380: 88.3582% ( 27) 00:10:21.271 11475.380 - 11528.019: 88.5039% ( 18) 00:10:21.271 11528.019 - 11580.659: 88.6658% ( 20) 00:10:21.271 11580.659 - 11633.298: 88.7872% ( 15) 00:10:21.271 11633.298 - 11685.937: 88.8925% ( 13) 00:10:21.271 11685.937 - 11738.577: 88.9977% ( 13) 00:10:21.271 11738.577 - 11791.216: 89.1192% ( 15) 00:10:21.271 11791.216 - 11843.855: 89.2325% ( 14) 00:10:21.271 11843.855 - 11896.495: 89.3459% ( 14) 00:10:21.271 11896.495 - 11949.134: 89.4592% ( 14) 00:10:21.271 11949.134 - 12001.773: 89.6778% ( 27) 00:10:21.271 12001.773 - 12054.413: 89.8316% ( 19) 00:10:21.271 12054.413 - 12107.052: 89.9126% ( 10) 00:10:21.271 12107.052 - 12159.692: 89.9773% ( 8) 00:10:21.271 12159.692 - 12212.331: 90.0583% ( 10) 00:10:21.271 12212.331 - 12264.970: 90.1635% ( 13) 00:10:21.271 12264.970 - 12317.610: 90.3174% ( 19) 00:10:21.271 12317.610 - 12370.249: 90.4388% ( 15) 00:10:21.271 12370.249 - 12422.888: 90.5440% ( 13) 00:10:21.271 12422.888 - 12475.528: 90.7060% ( 20) 00:10:21.271 12475.528 - 12528.167: 90.8193% ( 14) 00:10:21.271 12528.167 - 12580.806: 90.9893% ( 21) 00:10:21.271 12580.806 - 12633.446: 91.3051% ( 39) 00:10:21.271 12633.446 - 12686.085: 91.4994% ( 24) 00:10:21.271 12686.085 - 12738.724: 91.8232% ( 40) 00:10:21.271 12738.724 - 12791.364: 92.0661% ( 30) 00:10:21.271 12791.364 - 12844.003: 92.2685% ( 25) 00:10:21.271 12844.003 - 12896.643: 92.4790% ( 26) 00:10:21.271 12896.643 - 12949.282: 92.6975% ( 27) 00:10:21.271 12949.282 - 13001.921: 92.8433% ( 18) 00:10:21.271 13001.921 - 13054.561: 92.9971% ( 19) 00:10:21.271 13054.561 - 13107.200: 93.1266% ( 16) 00:10:21.271 13107.200 - 13159.839: 93.2723% ( 18) 00:10:21.271 13159.839 - 13212.479: 93.4181% ( 18) 00:10:21.271 13212.479 - 
13265.118: 93.5719% ( 19) 00:10:21.271 13265.118 - 13317.757: 93.8552% ( 35) 00:10:21.271 13317.757 - 13370.397: 94.0495% ( 24) 00:10:21.271 13370.397 - 13423.036: 94.2600% ( 26) 00:10:21.271 13423.036 - 13475.676: 94.4058% ( 18) 00:10:21.271 13475.676 - 13580.954: 94.6891% ( 35) 00:10:21.271 13580.954 - 13686.233: 94.9158% ( 28) 00:10:21.271 13686.233 - 13791.512: 95.0696% ( 19) 00:10:21.271 13791.512 - 13896.790: 95.2234% ( 19) 00:10:21.271 13896.790 - 14002.069: 95.3206% ( 12) 00:10:21.271 14002.069 - 14107.348: 95.4177% ( 12) 00:10:21.271 14107.348 - 14212.627: 95.5878% ( 21) 00:10:21.271 14212.627 - 14317.905: 95.6768% ( 11) 00:10:21.272 14317.905 - 14423.184: 95.7740% ( 12) 00:10:21.272 14423.184 - 14528.463: 95.8225% ( 6) 00:10:21.272 14528.463 - 14633.741: 95.8873% ( 8) 00:10:21.272 14633.741 - 14739.020: 95.9926% ( 13) 00:10:21.272 14739.020 - 14844.299: 96.1949% ( 25) 00:10:21.272 14844.299 - 14949.578: 96.3083% ( 14) 00:10:21.272 14949.578 - 15054.856: 96.4135% ( 13) 00:10:21.272 15054.856 - 15160.135: 96.5512% ( 17) 00:10:21.272 15160.135 - 15265.414: 96.7859% ( 29) 00:10:21.272 15265.414 - 15370.692: 96.8102% ( 3) 00:10:21.272 15370.692 - 15475.971: 96.8345% ( 3) 00:10:21.272 15475.971 - 15581.250: 96.8507% ( 2) 00:10:21.272 15581.250 - 15686.529: 96.8750% ( 3) 00:10:21.272 15686.529 - 15791.807: 96.8831% ( 1) 00:10:21.272 15791.807 - 15897.086: 96.8912% ( 1) 00:10:21.272 15897.086 - 16002.365: 96.8993% ( 1) 00:10:21.272 16002.365 - 16107.643: 96.9074% ( 1) 00:10:21.272 16212.922 - 16318.201: 96.9560% ( 6) 00:10:21.272 16318.201 - 16423.480: 97.0369% ( 10) 00:10:21.272 16423.480 - 16528.758: 97.2717% ( 29) 00:10:21.272 16528.758 - 16634.037: 97.3365% ( 8) 00:10:21.272 16634.037 - 16739.316: 97.3688% ( 4) 00:10:21.272 16739.316 - 16844.594: 97.4012% ( 4) 00:10:21.272 16844.594 - 16949.873: 97.4093% ( 1) 00:10:21.272 17265.709 - 17370.988: 97.4741% ( 8) 00:10:21.272 17370.988 - 17476.267: 97.4984% ( 3) 00:10:21.272 17476.267 - 17581.545: 97.5389% ( 5) 00:10:21.272 17581.545 - 17686.824: 97.5712% ( 4) 00:10:21.272 17686.824 - 17792.103: 97.6117% ( 5) 00:10:21.272 17792.103 - 17897.382: 97.6522% ( 5) 00:10:21.272 17897.382 - 18002.660: 97.6846% ( 4) 00:10:21.272 18002.660 - 18107.939: 97.7251% ( 5) 00:10:21.272 18107.939 - 18213.218: 97.7655% ( 5) 00:10:21.272 18213.218 - 18318.496: 97.8060% ( 5) 00:10:21.272 18318.496 - 18423.775: 97.8384% ( 4) 00:10:21.272 18423.775 - 18529.054: 97.8708% ( 4) 00:10:21.272 18529.054 - 18634.333: 97.9113% ( 5) 00:10:21.272 18634.333 - 18739.611: 97.9275% ( 2) 00:10:21.272 19687.120 - 19792.398: 97.9356% ( 1) 00:10:21.272 19792.398 - 19897.677: 98.0246% ( 11) 00:10:21.272 19897.677 - 20002.956: 98.1056% ( 10) 00:10:21.272 20002.956 - 20108.235: 98.1784% ( 9) 00:10:21.272 20108.235 - 20213.513: 98.2756% ( 12) 00:10:21.272 20213.513 - 20318.792: 98.2999% ( 3) 00:10:21.272 20318.792 - 20424.071: 98.3242% ( 3) 00:10:21.272 20424.071 - 20529.349: 98.4051% ( 10) 00:10:21.272 20529.349 - 20634.628: 98.4618% ( 7) 00:10:21.272 20634.628 - 20739.907: 98.5185% ( 7) 00:10:21.272 20739.907 - 20845.186: 98.5751% ( 7) 00:10:21.272 20845.186 - 20950.464: 98.6237% ( 6) 00:10:21.272 20950.464 - 21055.743: 98.6642% ( 5) 00:10:21.272 21055.743 - 21161.022: 98.7047% ( 5) 00:10:21.272 21161.022 - 21266.300: 98.7370% ( 4) 00:10:21.272 21266.300 - 21371.579: 98.7775% ( 5) 00:10:21.272 21371.579 - 21476.858: 98.8099% ( 4) 00:10:21.272 21476.858 - 21582.137: 98.8504% ( 5) 00:10:21.272 21582.137 - 21687.415: 98.8909% ( 5) 00:10:21.272 21687.415 - 21792.694: 98.9233% ( 4) 
00:10:21.272 21792.694 - 21897.973: 98.9637% ( 5) 00:10:21.272 28635.810 - 28846.368: 98.9799% ( 2) 00:10:21.272 28846.368 - 29056.925: 99.0366% ( 7) 00:10:21.272 29056.925 - 29267.483: 99.1095% ( 9) 00:10:21.272 29267.483 - 29478.040: 99.1742% ( 8) 00:10:21.272 29478.040 - 29688.598: 99.2390% ( 8) 00:10:21.272 29688.598 - 29899.155: 99.2876% ( 6) 00:10:21.272 29899.155 - 30109.712: 99.3523% ( 8) 00:10:21.272 30109.712 - 30320.270: 99.4171% ( 8) 00:10:21.272 30320.270 - 30530.827: 99.4819% ( 8) 00:10:21.272 36426.435 - 36636.993: 99.4981% ( 2) 00:10:21.272 36636.993 - 36847.550: 99.5628% ( 8) 00:10:21.272 36847.550 - 37058.108: 99.6195% ( 7) 00:10:21.272 37058.108 - 37268.665: 99.6843% ( 8) 00:10:21.272 37268.665 - 37479.222: 99.7490% ( 8) 00:10:21.272 37479.222 - 37689.780: 99.8138% ( 8) 00:10:21.272 37689.780 - 37900.337: 99.8786% ( 8) 00:10:21.272 37900.337 - 38110.895: 99.9433% ( 8) 00:10:21.272 38110.895 - 38321.452: 100.0000% ( 7) 00:10:21.272 00:10:21.272 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:21.272 ============================================================================== 00:10:21.272 Range in us Cumulative IO count 00:10:21.272 8001.182 - 8053.822: 0.0081% ( 1) 00:10:21.272 8159.100 - 8211.740: 0.0324% ( 3) 00:10:21.272 8211.740 - 8264.379: 0.0891% ( 7) 00:10:21.272 8264.379 - 8317.018: 0.1943% ( 13) 00:10:21.272 8317.018 - 8369.658: 0.3400% ( 18) 00:10:21.272 8369.658 - 8422.297: 0.5910% ( 31) 00:10:21.272 8422.297 - 8474.937: 0.7529% ( 20) 00:10:21.272 8474.937 - 8527.576: 0.9310% ( 22) 00:10:21.272 8527.576 - 8580.215: 1.1982% ( 33) 00:10:21.272 8580.215 - 8632.855: 1.6192% ( 52) 00:10:21.272 8632.855 - 8685.494: 2.2668% ( 80) 00:10:21.272 8685.494 - 8738.133: 3.1250% ( 106) 00:10:21.272 8738.133 - 8790.773: 4.2179% ( 135) 00:10:21.272 8790.773 - 8843.412: 5.2380% ( 126) 00:10:21.272 8843.412 - 8896.051: 6.4119% ( 145) 00:10:21.272 8896.051 - 8948.691: 7.9987% ( 196) 00:10:21.272 8948.691 - 9001.330: 9.6665% ( 206) 00:10:21.272 9001.330 - 9053.969: 11.4233% ( 217) 00:10:21.272 9053.969 - 9106.609: 13.5848% ( 267) 00:10:21.272 9106.609 - 9159.248: 16.4265% ( 351) 00:10:21.272 9159.248 - 9211.888: 19.0900% ( 329) 00:10:21.272 9211.888 - 9264.527: 22.1017% ( 372) 00:10:21.272 9264.527 - 9317.166: 25.2672% ( 391) 00:10:21.272 9317.166 - 9369.806: 28.3598% ( 382) 00:10:21.272 9369.806 - 9422.445: 31.5738% ( 397) 00:10:21.272 9422.445 - 9475.084: 35.3384% ( 465) 00:10:21.272 9475.084 - 9527.724: 39.2001% ( 477) 00:10:21.272 9527.724 - 9580.363: 43.1347% ( 486) 00:10:21.272 9580.363 - 9633.002: 47.2555% ( 509) 00:10:21.272 9633.002 - 9685.642: 50.6801% ( 423) 00:10:21.272 9685.642 - 9738.281: 53.6593% ( 368) 00:10:21.272 9738.281 - 9790.920: 56.6467% ( 369) 00:10:21.272 9790.920 - 9843.560: 59.6907% ( 376) 00:10:21.272 9843.560 - 9896.199: 62.5729% ( 356) 00:10:21.272 9896.199 - 9948.839: 64.5078% ( 239) 00:10:21.272 9948.839 - 10001.478: 66.3051% ( 222) 00:10:21.272 10001.478 - 10054.117: 67.9242% ( 200) 00:10:21.272 10054.117 - 10106.757: 69.2438% ( 163) 00:10:21.272 10106.757 - 10159.396: 70.5149% ( 157) 00:10:21.272 10159.396 - 10212.035: 71.6159% ( 136) 00:10:21.272 10212.035 - 10264.675: 72.9113% ( 160) 00:10:21.272 10264.675 - 10317.314: 74.4981% ( 196) 00:10:21.272 10317.314 - 10369.953: 75.7448% ( 154) 00:10:21.272 10369.953 - 10422.593: 77.2426% ( 185) 00:10:21.272 10422.593 - 10475.232: 78.3193% ( 133) 00:10:21.272 10475.232 - 10527.871: 79.6065% ( 159) 00:10:21.272 10527.871 - 10580.511: 80.3756% ( 95) 00:10:21.272 10580.511 - 10633.150: 
81.2338% ( 106) 00:10:21.272 10633.150 - 10685.790: 81.9139% ( 84) 00:10:21.272 10685.790 - 10738.429: 82.7720% ( 106) 00:10:21.272 10738.429 - 10791.068: 83.3225% ( 68) 00:10:21.272 10791.068 - 10843.708: 83.9054% ( 72) 00:10:21.272 10843.708 - 10896.347: 84.4398% ( 66) 00:10:21.272 10896.347 - 10948.986: 85.1684% ( 90) 00:10:21.272 10948.986 - 11001.626: 85.6865% ( 64) 00:10:21.272 11001.626 - 11054.265: 86.0266% ( 42) 00:10:21.272 11054.265 - 11106.904: 86.3018% ( 34) 00:10:21.272 11106.904 - 11159.544: 86.5447% ( 30) 00:10:21.272 11159.544 - 11212.183: 86.8685% ( 40) 00:10:21.272 11212.183 - 11264.822: 87.0628% ( 24) 00:10:21.272 11264.822 - 11317.462: 87.2166% ( 19) 00:10:21.272 11317.462 - 11370.101: 87.4109% ( 24) 00:10:21.272 11370.101 - 11422.741: 87.6295% ( 27) 00:10:21.272 11422.741 - 11475.380: 87.7834% ( 19) 00:10:21.272 11475.380 - 11528.019: 87.9858% ( 25) 00:10:21.272 11528.019 - 11580.659: 88.1558% ( 21) 00:10:21.272 11580.659 - 11633.298: 88.3420% ( 23) 00:10:21.272 11633.298 - 11685.937: 88.4472% ( 13) 00:10:21.272 11685.937 - 11738.577: 88.5282% ( 10) 00:10:21.272 11738.577 - 11791.216: 88.7549% ( 28) 00:10:21.272 11791.216 - 11843.855: 88.8925% ( 17) 00:10:21.272 11843.855 - 11896.495: 88.9896% ( 12) 00:10:21.272 11896.495 - 11949.134: 89.0625% ( 9) 00:10:21.272 11949.134 - 12001.773: 89.1677% ( 13) 00:10:21.272 12001.773 - 12054.413: 89.4349% ( 33) 00:10:21.272 12054.413 - 12107.052: 89.5644% ( 16) 00:10:21.272 12107.052 - 12159.692: 89.6535% ( 11) 00:10:21.272 12159.692 - 12212.331: 89.7992% ( 18) 00:10:21.272 12212.331 - 12264.970: 89.9530% ( 19) 00:10:21.272 12264.970 - 12317.610: 90.1635% ( 26) 00:10:21.272 12317.610 - 12370.249: 90.2526% ( 11) 00:10:21.272 12370.249 - 12422.888: 90.3174% ( 8) 00:10:21.272 12422.888 - 12475.528: 90.4064% ( 11) 00:10:21.272 12475.528 - 12528.167: 90.6088% ( 25) 00:10:21.272 12528.167 - 12580.806: 90.9893% ( 47) 00:10:21.272 12580.806 - 12633.446: 91.2808% ( 36) 00:10:21.272 12633.446 - 12686.085: 91.5641% ( 35) 00:10:21.272 12686.085 - 12738.724: 91.9365% ( 46) 00:10:21.272 12738.724 - 12791.364: 92.2927% ( 44) 00:10:21.272 12791.364 - 12844.003: 92.6490% ( 44) 00:10:21.272 12844.003 - 12896.643: 92.9242% ( 34) 00:10:21.272 12896.643 - 12949.282: 93.0295% ( 13) 00:10:21.272 12949.282 - 13001.921: 93.1104% ( 10) 00:10:21.272 13001.921 - 13054.561: 93.2157% ( 13) 00:10:21.272 13054.561 - 13107.200: 93.3209% ( 13) 00:10:21.272 13107.200 - 13159.839: 93.4343% ( 14) 00:10:21.272 13159.839 - 13212.479: 93.5395% ( 13) 00:10:21.272 13212.479 - 13265.118: 93.6771% ( 17) 00:10:21.272 13265.118 - 13317.757: 93.8067% ( 16) 00:10:21.272 13317.757 - 13370.397: 93.9929% ( 23) 00:10:21.272 13370.397 - 13423.036: 94.0738% ( 10) 00:10:21.272 13423.036 - 13475.676: 94.1710% ( 12) 00:10:21.272 13475.676 - 13580.954: 94.4058% ( 29) 00:10:21.273 13580.954 - 13686.233: 94.6405% ( 29) 00:10:21.273 13686.233 - 13791.512: 94.9644% ( 40) 00:10:21.273 13791.512 - 13896.790: 95.2963% ( 41) 00:10:21.273 13896.790 - 14002.069: 95.5311% ( 29) 00:10:21.273 14002.069 - 14107.348: 95.6606% ( 16) 00:10:21.273 14107.348 - 14212.627: 95.7173% ( 7) 00:10:21.273 14212.627 - 14317.905: 95.7416% ( 3) 00:10:21.273 14317.905 - 14423.184: 95.7740% ( 4) 00:10:21.273 14423.184 - 14528.463: 95.7983% ( 3) 00:10:21.273 14528.463 - 14633.741: 95.8225% ( 3) 00:10:21.273 14633.741 - 14739.020: 95.9278% ( 13) 00:10:21.273 14739.020 - 14844.299: 96.0411% ( 14) 00:10:21.273 14844.299 - 14949.578: 96.2516% ( 26) 00:10:21.273 14949.578 - 15054.856: 96.5188% ( 33) 00:10:21.273 15054.856 - 
15160.135: 96.7212% ( 25) 00:10:21.273 15160.135 - 15265.414: 96.8021% ( 10) 00:10:21.273 15265.414 - 15370.692: 96.8345% ( 4) 00:10:21.273 15370.692 - 15475.971: 96.8588% ( 3) 00:10:21.273 15475.971 - 15581.250: 96.8750% ( 2) 00:10:21.273 15581.250 - 15686.529: 96.8912% ( 2) 00:10:21.273 16528.758 - 16634.037: 96.9074% ( 2) 00:10:21.273 16634.037 - 16739.316: 96.9802% ( 9) 00:10:21.273 16739.316 - 16844.594: 97.1341% ( 19) 00:10:21.273 16844.594 - 16949.873: 97.4417% ( 38) 00:10:21.273 16949.873 - 17055.152: 97.5712% ( 16) 00:10:21.273 17055.152 - 17160.431: 97.6441% ( 9) 00:10:21.273 17160.431 - 17265.709: 97.6765% ( 4) 00:10:21.273 17265.709 - 17370.988: 97.7008% ( 3) 00:10:21.273 17370.988 - 17476.267: 97.7251% ( 3) 00:10:21.273 17476.267 - 17581.545: 97.7655% ( 5) 00:10:21.273 17581.545 - 17686.824: 97.8060% ( 5) 00:10:21.273 17686.824 - 17792.103: 97.8384% ( 4) 00:10:21.273 17792.103 - 17897.382: 97.8708% ( 4) 00:10:21.273 17897.382 - 18002.660: 97.9032% ( 4) 00:10:21.273 18002.660 - 18107.939: 97.9275% ( 3) 00:10:21.273 19581.841 - 19687.120: 98.0489% ( 15) 00:10:21.273 19687.120 - 19792.398: 98.1218% ( 9) 00:10:21.273 19792.398 - 19897.677: 98.1703% ( 6) 00:10:21.273 19897.677 - 20002.956: 98.3080% ( 17) 00:10:21.273 20002.956 - 20108.235: 98.4132% ( 13) 00:10:21.273 20108.235 - 20213.513: 98.4861% ( 9) 00:10:21.273 20213.513 - 20318.792: 98.5347% ( 6) 00:10:21.273 20318.792 - 20424.071: 98.5832% ( 6) 00:10:21.273 20424.071 - 20529.349: 98.6480% ( 8) 00:10:21.273 20529.349 - 20634.628: 98.7290% ( 10) 00:10:21.273 20634.628 - 20739.907: 98.7937% ( 8) 00:10:21.273 20739.907 - 20845.186: 98.8666% ( 9) 00:10:21.273 20845.186 - 20950.464: 98.9071% ( 5) 00:10:21.273 20950.464 - 21055.743: 98.9394% ( 4) 00:10:21.273 21055.743 - 21161.022: 98.9637% ( 3) 00:10:21.273 26951.351 - 27161.908: 99.0285% ( 8) 00:10:21.273 27161.908 - 27372.466: 99.0933% ( 8) 00:10:21.273 27372.466 - 27583.023: 99.1580% ( 8) 00:10:21.273 27583.023 - 27793.581: 99.2147% ( 7) 00:10:21.273 27793.581 - 28004.138: 99.2714% ( 7) 00:10:21.273 28004.138 - 28214.696: 99.3361% ( 8) 00:10:21.273 28214.696 - 28425.253: 99.4009% ( 8) 00:10:21.273 28425.253 - 28635.810: 99.4657% ( 8) 00:10:21.273 28635.810 - 28846.368: 99.4819% ( 2) 00:10:21.273 34531.418 - 34741.976: 99.5142% ( 4) 00:10:21.273 34741.976 - 34952.533: 99.5790% ( 8) 00:10:21.273 34952.533 - 35163.091: 99.6438% ( 8) 00:10:21.273 35163.091 - 35373.648: 99.7085% ( 8) 00:10:21.273 35373.648 - 35584.206: 99.7733% ( 8) 00:10:21.273 35584.206 - 35794.763: 99.8381% ( 8) 00:10:21.273 35794.763 - 36005.320: 99.8948% ( 7) 00:10:21.273 36005.320 - 36215.878: 99.9514% ( 7) 00:10:21.273 36215.878 - 36426.435: 100.0000% ( 6) 00:10:21.273 00:10:21.273 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:21.273 ============================================================================== 00:10:21.273 Range in us Cumulative IO count 00:10:21.273 8053.822 - 8106.461: 0.0081% ( 1) 00:10:21.273 8106.461 - 8159.100: 0.0567% ( 6) 00:10:21.273 8159.100 - 8211.740: 0.1295% ( 9) 00:10:21.273 8211.740 - 8264.379: 0.2348% ( 13) 00:10:21.273 8264.379 - 8317.018: 0.4129% ( 22) 00:10:21.273 8317.018 - 8369.658: 0.5262% ( 14) 00:10:21.273 8369.658 - 8422.297: 0.7205% ( 24) 00:10:21.273 8422.297 - 8474.937: 0.8824% ( 20) 00:10:21.273 8474.937 - 8527.576: 1.1253% ( 30) 00:10:21.273 8527.576 - 8580.215: 1.3925% ( 33) 00:10:21.273 8580.215 - 8632.855: 1.6273% ( 29) 00:10:21.273 8632.855 - 8685.494: 1.8863% ( 32) 00:10:21.273 8685.494 - 8738.133: 2.3073% ( 52) 00:10:21.273 8738.133 - 
8790.773: 2.9955% ( 85) 00:10:21.273 8790.773 - 8843.412: 3.8131% ( 101) 00:10:21.273 8843.412 - 8896.051: 5.0356% ( 151) 00:10:21.273 8896.051 - 8948.691: 6.8653% ( 226) 00:10:21.273 8948.691 - 9001.330: 9.0026% ( 264) 00:10:21.273 9001.330 - 9053.969: 11.6337% ( 325) 00:10:21.273 9053.969 - 9106.609: 14.1354% ( 309) 00:10:21.273 9106.609 - 9159.248: 17.2442% ( 384) 00:10:21.273 9159.248 - 9211.888: 20.1182% ( 355) 00:10:21.273 9211.888 - 9264.527: 23.4294% ( 409) 00:10:21.273 9264.527 - 9317.166: 27.1616% ( 461) 00:10:21.273 9317.166 - 9369.806: 30.3999% ( 400) 00:10:21.273 9369.806 - 9422.445: 33.7921% ( 419) 00:10:21.273 9422.445 - 9475.084: 37.2247% ( 424) 00:10:21.273 9475.084 - 9527.724: 40.8922% ( 453) 00:10:21.273 9527.724 - 9580.363: 44.1062% ( 397) 00:10:21.273 9580.363 - 9633.002: 47.5874% ( 430) 00:10:21.273 9633.002 - 9685.642: 50.9067% ( 410) 00:10:21.273 9685.642 - 9738.281: 53.7646% ( 353) 00:10:21.273 9738.281 - 9790.920: 56.8329% ( 379) 00:10:21.273 9790.920 - 9843.560: 59.4398% ( 322) 00:10:21.273 9843.560 - 9896.199: 61.7552% ( 286) 00:10:21.273 9896.199 - 9948.839: 64.0301% ( 281) 00:10:21.273 9948.839 - 10001.478: 65.7950% ( 218) 00:10:21.273 10001.478 - 10054.117: 67.5356% ( 215) 00:10:21.273 10054.117 - 10106.757: 69.2762% ( 215) 00:10:21.273 10106.757 - 10159.396: 70.7011% ( 176) 00:10:21.273 10159.396 - 10212.035: 72.2474% ( 191) 00:10:21.273 10212.035 - 10264.675: 73.9475% ( 210) 00:10:21.273 10264.675 - 10317.314: 75.2915% ( 166) 00:10:21.273 10317.314 - 10369.953: 76.4815% ( 147) 00:10:21.273 10369.953 - 10422.593: 77.9469% ( 181) 00:10:21.273 10422.593 - 10475.232: 79.1208% ( 145) 00:10:21.273 10475.232 - 10527.871: 79.9061% ( 97) 00:10:21.273 10527.871 - 10580.511: 80.6671% ( 94) 00:10:21.273 10580.511 - 10633.150: 81.2338% ( 70) 00:10:21.273 10633.150 - 10685.790: 81.9139% ( 84) 00:10:21.273 10685.790 - 10738.429: 82.5049% ( 73) 00:10:21.273 10738.429 - 10791.068: 82.9582% ( 56) 00:10:21.273 10791.068 - 10843.708: 83.6464% ( 85) 00:10:21.273 10843.708 - 10896.347: 84.1726% ( 65) 00:10:21.273 10896.347 - 10948.986: 84.5369% ( 45) 00:10:21.273 10948.986 - 11001.626: 85.0712% ( 66) 00:10:21.273 11001.626 - 11054.265: 85.4760% ( 50) 00:10:21.273 11054.265 - 11106.904: 85.8080% ( 41) 00:10:21.273 11106.904 - 11159.544: 86.1966% ( 48) 00:10:21.273 11159.544 - 11212.183: 86.4394% ( 30) 00:10:21.273 11212.183 - 11264.822: 86.7390% ( 37) 00:10:21.273 11264.822 - 11317.462: 87.0223% ( 35) 00:10:21.273 11317.462 - 11370.101: 87.2005% ( 22) 00:10:21.273 11370.101 - 11422.741: 87.3705% ( 21) 00:10:21.273 11422.741 - 11475.380: 87.5567% ( 23) 00:10:21.273 11475.380 - 11528.019: 87.7510% ( 24) 00:10:21.273 11528.019 - 11580.659: 87.8724% ( 15) 00:10:21.273 11580.659 - 11633.298: 87.9938% ( 15) 00:10:21.273 11633.298 - 11685.937: 88.1801% ( 23) 00:10:21.273 11685.937 - 11738.577: 88.3258% ( 18) 00:10:21.273 11738.577 - 11791.216: 88.4310% ( 13) 00:10:21.273 11791.216 - 11843.855: 88.5282% ( 12) 00:10:21.273 11843.855 - 11896.495: 88.6820% ( 19) 00:10:21.273 11896.495 - 11949.134: 88.8520% ( 21) 00:10:21.273 11949.134 - 12001.773: 89.0463% ( 24) 00:10:21.274 12001.773 - 12054.413: 89.2163% ( 21) 00:10:21.274 12054.413 - 12107.052: 89.3297% ( 14) 00:10:21.274 12107.052 - 12159.692: 89.3782% ( 6) 00:10:21.274 12159.692 - 12212.331: 89.4592% ( 10) 00:10:21.274 12212.331 - 12264.970: 89.5644% ( 13) 00:10:21.274 12264.970 - 12317.610: 89.6697% ( 13) 00:10:21.274 12317.610 - 12370.249: 89.8397% ( 21) 00:10:21.274 12370.249 - 12422.888: 90.0502% ( 26) 00:10:21.274 12422.888 - 
12475.528: 90.3093% ( 32) 00:10:21.274 12475.528 - 12528.167: 90.5198% ( 26) 00:10:21.274 12528.167 - 12580.806: 90.6817% ( 20) 00:10:21.274 12580.806 - 12633.446: 90.9812% ( 37) 00:10:21.274 12633.446 - 12686.085: 91.2889% ( 38) 00:10:21.274 12686.085 - 12738.724: 91.5398% ( 31) 00:10:21.274 12738.724 - 12791.364: 91.8475% ( 38) 00:10:21.274 12791.364 - 12844.003: 92.1065% ( 32) 00:10:21.274 12844.003 - 12896.643: 92.4223% ( 39) 00:10:21.274 12896.643 - 12949.282: 92.7785% ( 44) 00:10:21.274 12949.282 - 13001.921: 93.0295% ( 31) 00:10:21.274 13001.921 - 13054.561: 93.2400% ( 26) 00:10:21.274 13054.561 - 13107.200: 93.4100% ( 21) 00:10:21.274 13107.200 - 13159.839: 93.5476% ( 17) 00:10:21.274 13159.839 - 13212.479: 93.6528% ( 13) 00:10:21.274 13212.479 - 13265.118: 93.7257% ( 9) 00:10:21.274 13265.118 - 13317.757: 93.8310% ( 13) 00:10:21.274 13317.757 - 13370.397: 93.8795% ( 6) 00:10:21.274 13370.397 - 13423.036: 94.0010% ( 15) 00:10:21.274 13423.036 - 13475.676: 94.1872% ( 23) 00:10:21.274 13475.676 - 13580.954: 94.3734% ( 23) 00:10:21.274 13580.954 - 13686.233: 94.5677% ( 24) 00:10:21.274 13686.233 - 13791.512: 94.7377% ( 21) 00:10:21.274 13791.512 - 13896.790: 94.9077% ( 21) 00:10:21.274 13896.790 - 14002.069: 95.0453% ( 17) 00:10:21.274 14002.069 - 14107.348: 95.2963% ( 31) 00:10:21.274 14107.348 - 14212.627: 95.4582% ( 20) 00:10:21.274 14212.627 - 14317.905: 95.6768% ( 27) 00:10:21.274 14317.905 - 14423.184: 95.8549% ( 22) 00:10:21.274 14423.184 - 14528.463: 95.9278% ( 9) 00:10:21.274 14528.463 - 14633.741: 96.0249% ( 12) 00:10:21.274 14633.741 - 14739.020: 96.2192% ( 24) 00:10:21.274 14739.020 - 14844.299: 96.2840% ( 8) 00:10:21.274 14844.299 - 14949.578: 96.3569% ( 9) 00:10:21.274 14949.578 - 15054.856: 96.4378% ( 10) 00:10:21.274 15054.856 - 15160.135: 96.5188% ( 10) 00:10:21.274 15160.135 - 15265.414: 96.6726% ( 19) 00:10:21.274 15265.414 - 15370.692: 96.8345% ( 20) 00:10:21.274 15370.692 - 15475.971: 96.8912% ( 7) 00:10:21.274 16002.365 - 16107.643: 96.9074% ( 2) 00:10:21.274 16107.643 - 16212.922: 96.9883% ( 10) 00:10:21.274 16212.922 - 16318.201: 97.0450% ( 7) 00:10:21.274 16318.201 - 16423.480: 97.0693% ( 3) 00:10:21.274 16423.480 - 16528.758: 97.0936% ( 3) 00:10:21.274 16528.758 - 16634.037: 97.1260% ( 4) 00:10:21.274 16634.037 - 16739.316: 97.1665% ( 5) 00:10:21.274 16739.316 - 16844.594: 97.1988% ( 4) 00:10:21.274 16844.594 - 16949.873: 97.2636% ( 8) 00:10:21.274 16949.873 - 17055.152: 97.3850% ( 15) 00:10:21.274 17055.152 - 17160.431: 97.4984% ( 14) 00:10:21.274 17160.431 - 17265.709: 97.7170% ( 27) 00:10:21.274 17265.709 - 17370.988: 97.8303% ( 14) 00:10:21.274 17370.988 - 17476.267: 97.9113% ( 10) 00:10:21.274 17476.267 - 17581.545: 97.9275% ( 2) 00:10:21.274 19581.841 - 19687.120: 98.0003% ( 9) 00:10:21.274 19687.120 - 19792.398: 98.0813% ( 10) 00:10:21.274 19792.398 - 19897.677: 98.1946% ( 14) 00:10:21.274 19897.677 - 20002.956: 98.2837% ( 11) 00:10:21.274 20002.956 - 20108.235: 98.3889% ( 13) 00:10:21.274 20108.235 - 20213.513: 98.4294% ( 5) 00:10:21.274 20213.513 - 20318.792: 98.4861% ( 7) 00:10:21.274 20318.792 - 20424.071: 98.5589% ( 9) 00:10:21.274 20424.071 - 20529.349: 98.6237% ( 8) 00:10:21.274 20529.349 - 20634.628: 98.6966% ( 9) 00:10:21.274 20634.628 - 20739.907: 98.7775% ( 10) 00:10:21.274 20739.907 - 20845.186: 98.8423% ( 8) 00:10:21.274 20845.186 - 20950.464: 98.8828% ( 5) 00:10:21.274 20950.464 - 21055.743: 98.9233% ( 5) 00:10:21.274 21055.743 - 21161.022: 98.9637% ( 5) 00:10:21.274 25161.613 - 25266.892: 98.9718% ( 1) 00:10:21.274 25266.892 - 25372.170: 
99.0042% ( 4) 00:10:21.274 25372.170 - 25477.449: 99.0366% ( 4) 00:10:21.274 25477.449 - 25582.728: 99.0690% ( 4) 00:10:21.274 25582.728 - 25688.006: 99.0933% ( 3) 00:10:21.274 25688.006 - 25793.285: 99.1337% ( 5) 00:10:21.274 25793.285 - 25898.564: 99.1661% ( 4) 00:10:21.274 25898.564 - 26003.843: 99.1904% ( 3) 00:10:21.274 26003.843 - 26109.121: 99.2309% ( 5) 00:10:21.274 26109.121 - 26214.400: 99.2633% ( 4) 00:10:21.274 26214.400 - 26319.679: 99.2957% ( 4) 00:10:21.274 26319.679 - 26424.957: 99.3280% ( 4) 00:10:21.274 26424.957 - 26530.236: 99.3604% ( 4) 00:10:21.274 26530.236 - 26635.515: 99.3847% ( 3) 00:10:21.274 26635.515 - 26740.794: 99.4171% ( 4) 00:10:21.274 26740.794 - 26846.072: 99.4495% ( 4) 00:10:21.274 26846.072 - 26951.351: 99.4819% ( 4) 00:10:21.274 33268.074 - 33478.631: 99.5304% ( 6) 00:10:21.274 33478.631 - 33689.189: 99.5952% ( 8) 00:10:21.274 33689.189 - 33899.746: 99.6438% ( 6) 00:10:21.274 33899.746 - 34110.304: 99.7085% ( 8) 00:10:21.274 34110.304 - 34320.861: 99.7652% ( 7) 00:10:21.274 34320.861 - 34531.418: 99.8381% ( 9) 00:10:21.274 34531.418 - 34741.976: 99.9028% ( 8) 00:10:21.274 34741.976 - 34952.533: 99.9676% ( 8) 00:10:21.274 34952.533 - 35163.091: 100.0000% ( 4) 00:10:21.274 00:10:21.274 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:21.274 ============================================================================== 00:10:21.274 Range in us Cumulative IO count 00:10:21.274 8106.461 - 8159.100: 0.0081% ( 1) 00:10:21.274 8159.100 - 8211.740: 0.0161% ( 1) 00:10:21.274 8211.740 - 8264.379: 0.0242% ( 1) 00:10:21.274 8317.018 - 8369.658: 0.0725% ( 6) 00:10:21.274 8369.658 - 8422.297: 0.2255% ( 19) 00:10:21.274 8422.297 - 8474.937: 0.4752% ( 31) 00:10:21.274 8474.937 - 8527.576: 0.8376% ( 45) 00:10:21.274 8527.576 - 8580.215: 1.2887% ( 56) 00:10:21.274 8580.215 - 8632.855: 1.6914% ( 50) 00:10:21.274 8632.855 - 8685.494: 2.4485% ( 94) 00:10:21.274 8685.494 - 8738.133: 3.0847% ( 79) 00:10:21.274 8738.133 - 8790.773: 3.7774% ( 86) 00:10:21.274 8790.773 - 8843.412: 4.4781% ( 87) 00:10:21.274 8843.412 - 8896.051: 5.4043% ( 115) 00:10:21.274 8896.051 - 8948.691: 6.8460% ( 179) 00:10:21.274 8948.691 - 9001.330: 8.8837% ( 253) 00:10:21.274 9001.330 - 9053.969: 11.5657% ( 333) 00:10:21.274 9053.969 - 9106.609: 14.1350% ( 319) 00:10:21.274 9106.609 - 9159.248: 16.7445% ( 324) 00:10:21.274 9159.248 - 9211.888: 20.2642% ( 437) 00:10:21.274 9211.888 - 9264.527: 23.4617% ( 397) 00:10:21.274 9264.527 - 9317.166: 26.5061% ( 378) 00:10:21.274 9317.166 - 9369.806: 30.1224% ( 449) 00:10:21.274 9369.806 - 9422.445: 33.7065% ( 445) 00:10:21.274 9422.445 - 9475.084: 37.2745% ( 443) 00:10:21.274 9475.084 - 9527.724: 40.3351% ( 380) 00:10:21.274 9527.724 - 9580.363: 43.4117% ( 382) 00:10:21.274 9580.363 - 9633.002: 46.8347% ( 425) 00:10:21.274 9633.002 - 9685.642: 50.0322% ( 397) 00:10:21.274 9685.642 - 9738.281: 53.6163% ( 445) 00:10:21.274 9738.281 - 9790.920: 56.2339% ( 325) 00:10:21.274 9790.920 - 9843.560: 59.4233% ( 396) 00:10:21.274 9843.560 - 9896.199: 62.0409% ( 325) 00:10:21.274 9896.199 - 9948.839: 64.5780% ( 315) 00:10:21.274 9948.839 - 10001.478: 66.1002% ( 189) 00:10:21.274 10001.478 - 10054.117: 67.6144% ( 188) 00:10:21.274 10054.117 - 10106.757: 69.0963% ( 184) 00:10:21.274 10106.757 - 10159.396: 70.6508% ( 193) 00:10:21.274 10159.396 - 10212.035: 72.3421% ( 210) 00:10:21.274 10212.035 - 10264.675: 73.7516% ( 175) 00:10:21.274 10264.675 - 10317.314: 75.4027% ( 205) 00:10:21.274 10317.314 - 10369.953: 76.4900% ( 135) 00:10:21.274 10369.953 - 
10422.593: 77.9156% ( 177) 00:10:21.274 10422.593 - 10475.232: 78.8901% ( 121) 00:10:21.274 10475.232 - 10527.871: 79.8889% ( 124) 00:10:21.274 10527.871 - 10580.511: 80.6540% ( 95) 00:10:21.274 10580.511 - 10633.150: 81.2581% ( 75) 00:10:21.274 10633.150 - 10685.790: 81.9024% ( 80) 00:10:21.274 10685.790 - 10738.429: 82.5548% ( 81) 00:10:21.274 10738.429 - 10791.068: 82.8689% ( 39) 00:10:21.274 10791.068 - 10843.708: 83.3682% ( 62) 00:10:21.274 10843.708 - 10896.347: 83.9159% ( 68) 00:10:21.274 10896.347 - 10948.986: 84.2300% ( 39) 00:10:21.274 10948.986 - 11001.626: 84.8744% ( 80) 00:10:21.274 11001.626 - 11054.265: 85.2207% ( 43) 00:10:21.274 11054.265 - 11106.904: 85.4381% ( 27) 00:10:21.274 11106.904 - 11159.544: 85.6476% ( 26) 00:10:21.274 11159.544 - 11212.183: 85.9294% ( 35) 00:10:21.274 11212.183 - 11264.822: 86.0986% ( 21) 00:10:21.274 11264.822 - 11317.462: 86.1952% ( 12) 00:10:21.274 11317.462 - 11370.101: 86.2677% ( 9) 00:10:21.274 11370.101 - 11422.741: 86.3322% ( 8) 00:10:21.274 11422.741 - 11475.380: 86.5174% ( 23) 00:10:21.274 11475.380 - 11528.019: 86.7107% ( 24) 00:10:21.274 11528.019 - 11580.659: 86.9120% ( 25) 00:10:21.274 11580.659 - 11633.298: 87.0651% ( 19) 00:10:21.274 11633.298 - 11685.937: 87.2825% ( 27) 00:10:21.274 11685.937 - 11738.577: 87.4517% ( 21) 00:10:21.274 11738.577 - 11791.216: 87.5483% ( 12) 00:10:21.274 11791.216 - 11843.855: 87.6128% ( 8) 00:10:21.274 11843.855 - 11896.495: 87.7094% ( 12) 00:10:21.274 11896.495 - 11949.134: 87.7738% ( 8) 00:10:21.274 11949.134 - 12001.773: 87.8624% ( 11) 00:10:21.274 12001.773 - 12054.413: 88.0074% ( 18) 00:10:21.274 12054.413 - 12107.052: 88.2410% ( 29) 00:10:21.274 12107.052 - 12159.692: 88.4101% ( 21) 00:10:21.274 12159.692 - 12212.331: 88.6598% ( 31) 00:10:21.274 12212.331 - 12264.970: 88.9820% ( 40) 00:10:21.274 12264.970 - 12317.610: 89.2800% ( 37) 00:10:21.275 12317.610 - 12370.249: 89.5538% ( 34) 00:10:21.275 12370.249 - 12422.888: 89.9726% ( 52) 00:10:21.275 12422.888 - 12475.528: 90.2223% ( 31) 00:10:21.275 12475.528 - 12528.167: 90.4236% ( 25) 00:10:21.275 12528.167 - 12580.806: 90.6411% ( 27) 00:10:21.275 12580.806 - 12633.446: 90.8102% ( 21) 00:10:21.275 12633.446 - 12686.085: 90.9391% ( 16) 00:10:21.275 12686.085 - 12738.724: 91.0921% ( 19) 00:10:21.275 12738.724 - 12791.364: 91.3579% ( 33) 00:10:21.275 12791.364 - 12844.003: 91.5271% ( 21) 00:10:21.275 12844.003 - 12896.643: 91.7284% ( 25) 00:10:21.275 12896.643 - 12949.282: 91.9298% ( 25) 00:10:21.275 12949.282 - 13001.921: 92.1070% ( 22) 00:10:21.275 13001.921 - 13054.561: 92.4774% ( 46) 00:10:21.275 13054.561 - 13107.200: 92.7271% ( 31) 00:10:21.275 13107.200 - 13159.839: 92.8882% ( 20) 00:10:21.275 13159.839 - 13212.479: 93.0412% ( 19) 00:10:21.275 13212.479 - 13265.118: 93.2345% ( 24) 00:10:21.275 13265.118 - 13317.757: 93.3151% ( 10) 00:10:21.275 13317.757 - 13370.397: 93.3715% ( 7) 00:10:21.275 13370.397 - 13423.036: 93.4198% ( 6) 00:10:21.275 13423.036 - 13475.676: 93.5084% ( 11) 00:10:21.275 13475.676 - 13580.954: 93.7339% ( 28) 00:10:21.275 13580.954 - 13686.233: 93.9111% ( 22) 00:10:21.275 13686.233 - 13791.512: 94.2574% ( 43) 00:10:21.275 13791.512 - 13896.790: 94.4024% ( 18) 00:10:21.275 13896.790 - 14002.069: 94.5151% ( 14) 00:10:21.275 14002.069 - 14107.348: 94.7890% ( 34) 00:10:21.275 14107.348 - 14212.627: 95.2159% ( 53) 00:10:21.275 14212.627 - 14317.905: 95.4977% ( 35) 00:10:21.275 14317.905 - 14423.184: 95.6105% ( 14) 00:10:21.275 14423.184 - 14528.463: 95.8280% ( 27) 00:10:21.275 14528.463 - 14633.741: 96.2790% ( 56) 00:10:21.275 
14633.741 - 14739.020: 96.5528% ( 34) 00:10:21.275 14739.020 - 14844.299: 96.7139% ( 20) 00:10:21.275 14844.299 - 14949.578: 96.7864% ( 9) 00:10:21.275 14949.578 - 15054.856: 96.8508% ( 8) 00:10:21.275 15054.856 - 15160.135: 96.8911% ( 5) 00:10:21.275 15160.135 - 15265.414: 96.9072% ( 2) 00:10:21.275 15370.692 - 15475.971: 97.0200% ( 14) 00:10:21.275 15475.971 - 15581.250: 97.0361% ( 2) 00:10:21.275 15581.250 - 15686.529: 97.0522% ( 2) 00:10:21.275 15686.529 - 15791.807: 97.0844% ( 4) 00:10:21.275 15791.807 - 15897.086: 97.1247% ( 5) 00:10:21.275 15897.086 - 16002.365: 97.1569% ( 4) 00:10:21.275 16002.365 - 16107.643: 97.1891% ( 4) 00:10:21.275 16107.643 - 16212.922: 97.2213% ( 4) 00:10:21.275 16212.922 - 16318.201: 97.2535% ( 4) 00:10:21.275 16318.201 - 16423.480: 97.2938% ( 5) 00:10:21.275 16423.480 - 16528.758: 97.3341% ( 5) 00:10:21.275 16528.758 - 16634.037: 97.3663% ( 4) 00:10:21.275 16634.037 - 16739.316: 97.4066% ( 5) 00:10:21.275 16739.316 - 16844.594: 97.4227% ( 2) 00:10:21.275 17160.431 - 17265.709: 97.4307% ( 1) 00:10:21.275 17265.709 - 17370.988: 97.4710% ( 5) 00:10:21.275 17370.988 - 17476.267: 97.5032% ( 4) 00:10:21.275 17476.267 - 17581.545: 97.5354% ( 4) 00:10:21.275 17581.545 - 17686.824: 97.5757% ( 5) 00:10:21.275 17686.824 - 17792.103: 97.6724% ( 12) 00:10:21.275 17792.103 - 17897.382: 97.7610% ( 11) 00:10:21.275 17897.382 - 18002.660: 98.0187% ( 32) 00:10:21.275 18002.660 - 18107.939: 98.1556% ( 17) 00:10:21.275 18107.939 - 18213.218: 98.2200% ( 8) 00:10:21.275 18213.218 - 18318.496: 98.2684% ( 6) 00:10:21.275 18318.496 - 18423.775: 98.3006% ( 4) 00:10:21.275 18423.775 - 18529.054: 98.3328% ( 4) 00:10:21.275 18529.054 - 18634.333: 98.3650% ( 4) 00:10:21.275 18634.333 - 18739.611: 98.3972% ( 4) 00:10:21.275 18739.611 - 18844.890: 98.4375% ( 5) 00:10:21.275 18844.890 - 18950.169: 98.4617% ( 3) 00:10:21.275 18950.169 - 19055.447: 98.4697% ( 1) 00:10:21.275 19055.447 - 19160.726: 98.5422% ( 9) 00:10:21.275 19160.726 - 19266.005: 98.5744% ( 4) 00:10:21.275 19266.005 - 19371.284: 98.6066% ( 4) 00:10:21.275 19371.284 - 19476.562: 98.6389% ( 4) 00:10:21.275 19476.562 - 19581.841: 98.6711% ( 4) 00:10:21.275 19581.841 - 19687.120: 98.7274% ( 7) 00:10:21.275 19687.120 - 19792.398: 98.8322% ( 13) 00:10:21.275 19792.398 - 19897.677: 98.8885% ( 7) 00:10:21.275 19897.677 - 20002.956: 98.9610% ( 9) 00:10:21.275 20002.956 - 20108.235: 99.0255% ( 8) 00:10:21.275 20108.235 - 20213.513: 99.0979% ( 9) 00:10:21.275 20213.513 - 20318.792: 99.1785% ( 10) 00:10:21.275 20318.792 - 20424.071: 99.2429% ( 8) 00:10:21.275 20424.071 - 20529.349: 99.2912% ( 6) 00:10:21.275 20529.349 - 20634.628: 99.3235% ( 4) 00:10:21.275 20634.628 - 20739.907: 99.3557% ( 4) 00:10:21.275 20739.907 - 20845.186: 99.3798% ( 3) 00:10:21.275 20845.186 - 20950.464: 99.4201% ( 5) 00:10:21.275 20950.464 - 21055.743: 99.4604% ( 5) 00:10:21.275 21055.743 - 21161.022: 99.4845% ( 3) 00:10:21.275 24740.498 - 24845.777: 99.5087% ( 3) 00:10:21.275 24845.777 - 24951.055: 99.5329% ( 3) 00:10:21.275 24951.055 - 25056.334: 99.5731% ( 5) 00:10:21.275 25056.334 - 25161.613: 99.5973% ( 3) 00:10:21.275 25161.613 - 25266.892: 99.6295% ( 4) 00:10:21.275 25266.892 - 25372.170: 99.6617% ( 4) 00:10:21.275 25372.170 - 25477.449: 99.6939% ( 4) 00:10:21.275 25477.449 - 25582.728: 99.7262% ( 4) 00:10:21.275 25582.728 - 25688.006: 99.7584% ( 4) 00:10:21.275 25688.006 - 25793.285: 99.7906% ( 4) 00:10:21.275 25793.285 - 25898.564: 99.8228% ( 4) 00:10:21.275 25898.564 - 26003.843: 99.8550% ( 4) 00:10:21.275 26003.843 - 26109.121: 99.8872% ( 4) 
00:10:21.275 26109.121 - 26214.400: 99.9195% ( 4) 00:10:21.275 26214.400 - 26319.679: 99.9517% ( 4) 00:10:21.275 26319.679 - 26424.957: 99.9839% ( 4) 00:10:21.275 26424.957 - 26530.236: 100.0000% ( 2) 00:10:21.275 00:10:21.275 16:04:39 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:21.275 00:10:21.275 real 0m2.691s 00:10:21.275 user 0m2.293s 00:10:21.275 sys 0m0.298s 00:10:21.275 16:04:39 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:21.275 ************************************ 00:10:21.275 END TEST nvme_perf 00:10:21.275 16:04:39 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:10:21.275 ************************************ 00:10:21.275 16:04:39 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:21.275 16:04:39 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:21.275 16:04:39 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:21.275 16:04:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:21.275 ************************************ 00:10:21.275 START TEST nvme_hello_world 00:10:21.275 ************************************ 00:10:21.275 16:04:39 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:21.534 Initializing NVMe Controllers 00:10:21.534 Attached to 0000:00:10.0 00:10:21.534 Namespace ID: 1 size: 6GB 00:10:21.534 Attached to 0000:00:11.0 00:10:21.534 Namespace ID: 1 size: 5GB 00:10:21.534 Attached to 0000:00:13.0 00:10:21.534 Namespace ID: 1 size: 1GB 00:10:21.534 Attached to 0000:00:12.0 00:10:21.534 Namespace ID: 1 size: 4GB 00:10:21.534 Namespace ID: 2 size: 4GB 00:10:21.534 Namespace ID: 3 size: 4GB 00:10:21.534 Initialization complete. 00:10:21.534 INFO: using host memory buffer for IO 00:10:21.534 Hello world! 00:10:21.534 INFO: using host memory buffer for IO 00:10:21.534 Hello world! 00:10:21.534 INFO: using host memory buffer for IO 00:10:21.534 Hello world! 00:10:21.534 INFO: using host memory buffer for IO 00:10:21.534 Hello world! 00:10:21.534 INFO: using host memory buffer for IO 00:10:21.534 Hello world! 00:10:21.534 INFO: using host memory buffer for IO 00:10:21.534 Hello world! 
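The hello_world output above is the stock SPDK example flow: probe the PCIe controllers, attach to each one, then write and read back a one-sector "Hello world!" payload on every namespace through an ordinary host DMA buffer (the "using host memory buffer for IO" lines). A minimal sketch of that flow against a single controller follows; it uses the public SPDK NVMe API, but the buffer size, LBA, and single-controller shortcut are illustrative simplifications, not the example's exact code.

/* Sketch: probe/attach one NVMe controller, write and read back one sector
 * through a host DMA buffer. Error handling is trimmed for brevity. */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static struct spdk_nvme_ctrlr *g_ctrlr;

static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                     struct spdk_nvme_ctrlr_opts *opts)
{
    return true;                          /* attach to everything we find */
}

static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                      struct spdk_nvme_ctrlr *ctrlr,
                      const struct spdk_nvme_ctrlr_opts *opts)
{
    g_ctrlr = ctrlr;                      /* keep only the last controller */
}

static void io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    *(bool *)arg = true;
}

int main(void)
{
    struct spdk_env_opts opts;
    bool done = false;

    spdk_env_opts_init(&opts);
    opts.name = "hello_sketch";
    if (spdk_env_init(&opts) < 0 ||
        spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0 || !g_ctrlr) {
        return 1;
    }

    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(g_ctrlr, 1);
    struct spdk_nvme_qpair *qp = spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);
    char *buf = spdk_zmalloc(0x1000, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY,
                             SPDK_MALLOC_DMA);   /* the host memory buffer */

    snprintf(buf, 0x1000, "Hello world!\n");
    spdk_nvme_ns_cmd_write(ns, qp, buf, 0 /* LBA */, 1, io_done, &done, 0);
    while (!done) {
        spdk_nvme_qpair_process_completions(qp, 0);
    }

    done = false;
    memset(buf, 0, 0x1000);
    spdk_nvme_ns_cmd_read(ns, qp, buf, 0, 1, io_done, &done, 0);
    while (!done) {
        spdk_nvme_qpair_process_completions(qp, 0);
    }
    printf("%s", buf);

    spdk_free(buf);
    spdk_nvme_ctrlr_free_io_qpair(qp);
    spdk_nvme_detach(g_ctrlr);
    return 0;
}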
00:10:21.534 00:10:21.534 real 0m0.305s 00:10:21.534 user 0m0.117s 00:10:21.534 sys 0m0.147s 00:10:21.534 16:04:40 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:21.534 16:04:40 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:21.534 ************************************ 00:10:21.534 END TEST nvme_hello_world 00:10:21.534 ************************************ 00:10:21.534 16:04:40 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:21.534 16:04:40 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:21.534 16:04:40 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:21.534 16:04:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:21.534 ************************************ 00:10:21.534 START TEST nvme_sgl 00:10:21.534 ************************************ 00:10:21.534 16:04:40 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:21.792 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:10:21.792 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:10:21.792 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:10:22.050 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:10:22.050 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:10:22.050 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:10:22.050 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:10:22.050 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:10:22.050 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:10:22.050 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:10:22.050 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:10:22.050 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:10:22.050 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:10:22.050 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:10:22.050 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:10:22.050 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:10:22.050 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:10:22.050 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:10:22.050 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:10:22.050 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:10:22.050 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:10:22.050 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:10:22.050 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:10:22.050 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:10:22.050 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:10:22.050 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:10:22.050 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:10:22.050 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:10:22.050 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:10:22.050 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:10:22.050 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:10:22.050 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:10:22.050 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:10:22.050 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:10:22.050 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:10:22.050 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:10:22.050 NVMe Readv/Writev Request test 00:10:22.050 Attached to 0000:00:10.0 00:10:22.050 Attached to 0000:00:11.0 00:10:22.050 Attached to 0000:00:13.0 00:10:22.050 Attached to 0000:00:12.0 00:10:22.050 0000:00:10.0: build_io_request_2 test passed 00:10:22.050 0000:00:10.0: build_io_request_4 test passed 00:10:22.050 0000:00:10.0: build_io_request_5 test passed 00:10:22.050 0000:00:10.0: build_io_request_6 test passed 00:10:22.050 0000:00:10.0: build_io_request_7 test passed 00:10:22.050 0000:00:10.0: build_io_request_10 test passed 00:10:22.050 0000:00:11.0: build_io_request_2 test passed 00:10:22.050 0000:00:11.0: build_io_request_4 test passed 00:10:22.050 0000:00:11.0: build_io_request_5 test passed 00:10:22.050 0000:00:11.0: build_io_request_6 test passed 00:10:22.050 0000:00:11.0: build_io_request_7 test passed 00:10:22.050 0000:00:11.0: build_io_request_10 test passed 00:10:22.050 Cleaning up... 00:10:22.050 00:10:22.050 real 0m0.376s 00:10:22.050 user 0m0.179s 00:10:22.050 sys 0m0.153s 00:10:22.050 16:04:40 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:22.050 ************************************ 00:10:22.050 END TEST nvme_sgl 00:10:22.050 ************************************ 00:10:22.050 16:04:40 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:10:22.050 16:04:40 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:22.050 16:04:40 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:22.050 16:04:40 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:22.050 16:04:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:22.050 ************************************ 00:10:22.050 START TEST nvme_e2edp 00:10:22.050 ************************************ 00:10:22.050 16:04:40 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:22.309 NVMe Write/Read with End-to-End data protection test 00:10:22.309 Attached to 0000:00:10.0 00:10:22.309 Attached to 0000:00:11.0 00:10:22.309 Attached to 0000:00:13.0 00:10:22.309 Attached to 0000:00:12.0 00:10:22.309 Cleaning up... 
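The sgl results above exercise the vectored submission path: spdk_nvme_ns_cmd_readv()/writev() build each request from a pair of caller-supplied SGL callbacks rather than one contiguous buffer, and the "Invalid IO length parameter" lines are the expected negative cases where the segment lengths are made inconsistent with the LBA count so submission is rejected. A sketch of that callback pair follows; the two-segment layout, names, and lengths are illustrative, not the test's actual cases.

/* Sketch: describe an 8 KiB payload split across two host segments and
 * submit it as a single vectored read. */
#include <sys/uio.h>
#include "spdk/nvme.h"

struct sgl_ctx {
    struct iovec iov[2];   /* two DMA-able segments, e.g. from spdk_zmalloc() */
    int          idx;      /* next segment for next_sge() to hand out */
};

static void reset_sgl(void *cb_arg, uint32_t sgl_offset)
{
    struct sgl_ctx *ctx = cb_arg;

    ctx->idx = 0;          /* the driver may rewind and re-walk the list */
    (void)sgl_offset;      /* a real implementation would seek to this offset */
}

static int next_sge(void *cb_arg, void **address, uint32_t *length)
{
    struct sgl_ctx *ctx = cb_arg;

    *address = ctx->iov[ctx->idx].iov_base;
    *length  = (uint32_t)ctx->iov[ctx->idx].iov_len;
    ctx->idx++;
    return 0;
}

static void read_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    /* cb_arg is the same sgl_ctx the SGL callbacks received. */
    (void)cb_arg; (void)cpl;
}

static int submit_split_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                             struct sgl_ctx *ctx)
{
    /* 16 LBAs of 512 B = 8 KiB; if iov[0] + iov[1] do not add up to that,
     * submission fails -- the "Invalid IO length parameter" cases above. */
    return spdk_nvme_ns_cmd_readv(ns, qp, 0 /* LBA */, 16, read_done, ctx, 0,
                                  reset_sgl, next_sge);
}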
00:10:22.309 00:10:22.309 real 0m0.312s 00:10:22.309 user 0m0.102s 00:10:22.309 sys 0m0.157s 00:10:22.309 16:04:40 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:22.309 16:04:40 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:10:22.309 ************************************ 00:10:22.309 END TEST nvme_e2edp 00:10:22.309 ************************************ 00:10:22.568 16:04:41 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:22.568 16:04:41 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:22.568 16:04:41 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:22.568 16:04:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:22.568 ************************************ 00:10:22.568 START TEST nvme_reserve 00:10:22.568 ************************************ 00:10:22.568 16:04:41 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:22.827 ===================================================== 00:10:22.827 NVMe Controller at PCI bus 0, device 16, function 0 00:10:22.827 ===================================================== 00:10:22.827 Reservations: Not Supported 00:10:22.827 ===================================================== 00:10:22.827 NVMe Controller at PCI bus 0, device 17, function 0 00:10:22.827 ===================================================== 00:10:22.827 Reservations: Not Supported 00:10:22.827 ===================================================== 00:10:22.827 NVMe Controller at PCI bus 0, device 19, function 0 00:10:22.827 ===================================================== 00:10:22.827 Reservations: Not Supported 00:10:22.827 ===================================================== 00:10:22.827 NVMe Controller at PCI bus 0, device 18, function 0 00:10:22.827 ===================================================== 00:10:22.827 Reservations: Not Supported 00:10:22.827 Reservation test passed 00:10:22.827 00:10:22.827 real 0m0.296s 00:10:22.827 user 0m0.107s 00:10:22.827 sys 0m0.146s 00:10:22.827 16:04:41 nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:22.827 ************************************ 00:10:22.827 END TEST nvme_reserve 00:10:22.827 ************************************ 00:10:22.827 16:04:41 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:10:22.827 16:04:41 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:22.827 16:04:41 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:22.827 16:04:41 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:22.827 16:04:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:22.827 ************************************ 00:10:22.827 START TEST nvme_err_injection 00:10:22.827 ************************************ 00:10:22.827 16:04:41 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:23.086 NVMe Error Injection test 00:10:23.086 Attached to 0000:00:10.0 00:10:23.086 Attached to 0000:00:11.0 00:10:23.086 Attached to 0000:00:13.0 00:10:23.086 Attached to 0000:00:12.0 00:10:23.086 0000:00:10.0: get features failed as expected 00:10:23.086 0000:00:11.0: get features failed as expected 00:10:23.086 0000:00:13.0: get features failed as expected 00:10:23.086 0000:00:12.0: get features failed as expected 00:10:23.086 
0000:00:13.0: get features successfully as expected 00:10:23.086 0000:00:12.0: get features successfully as expected 00:10:23.086 0000:00:10.0: get features successfully as expected 00:10:23.086 0000:00:11.0: get features successfully as expected 00:10:23.086 0000:00:10.0: read failed as expected 00:10:23.086 0000:00:11.0: read failed as expected 00:10:23.086 0000:00:13.0: read failed as expected 00:10:23.086 0000:00:12.0: read failed as expected 00:10:23.086 0000:00:10.0: read successfully as expected 00:10:23.086 0000:00:11.0: read successfully as expected 00:10:23.086 0000:00:13.0: read successfully as expected 00:10:23.086 0000:00:12.0: read successfully as expected 00:10:23.086 Cleaning up... 00:10:23.086 00:10:23.086 real 0m0.294s 00:10:23.086 user 0m0.107s 00:10:23.086 sys 0m0.146s 00:10:23.086 16:04:41 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:23.086 16:04:41 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:10:23.086 ************************************ 00:10:23.086 END TEST nvme_err_injection 00:10:23.086 ************************************ 00:10:23.086 16:04:41 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:23.086 16:04:41 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']' 00:10:23.086 16:04:41 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:23.086 16:04:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:23.086 ************************************ 00:10:23.086 START TEST nvme_overhead 00:10:23.086 ************************************ 00:10:23.086 16:04:41 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:24.469 Initializing NVMe Controllers 00:10:24.469 Attached to 0000:00:10.0 00:10:24.469 Attached to 0000:00:11.0 00:10:24.469 Attached to 0000:00:13.0 00:10:24.469 Attached to 0000:00:12.0 00:10:24.469 Initialization complete. Launching workers. 
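The paired lines in the err_injection run above, each "failed as expected" followed by "successfully as expected", come from SPDK's command error injection hooks: the test arms an injection for a given admin opcode, watches that command fail with the injected status, removes the injection, and reissues the command. A sketch of the arm/disarm calls follows; the opcode, status codes, and counts are illustrative choices, not necessarily the ones the test uses.

/* Sketch: force the next Get Features admin command to fail with a generic
 * Invalid Field status, then remove the injection so later commands pass. */
#include <stdbool.h>
#include "spdk/nvme.h"

static int arm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
    /* A NULL qpair targets the admin queue pair. */
    return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
            SPDK_NVME_OPC_GET_FEATURES,
            false,                       /* do_not_submit */
            0,                           /* timeout_in_us */
            1,                           /* err_count: fail one command */
            SPDK_NVME_SCT_GENERIC,
            SPDK_NVME_SC_INVALID_FIELD);
}

static void disarm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
            SPDK_NVME_OPC_GET_FEATURES);
}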
00:10:24.469 submit (in ns) avg, min, max = 13501.3, 10771.9, 101366.3 00:10:24.469 complete (in ns) avg, min, max = 8643.0, 7793.6, 67081.1 00:10:24.469 00:10:24.469 Submit histogram 00:10:24.469 ================ 00:10:24.469 Range in us Cumulative Count 00:10:24.469 10.744 - 10.795: 0.0168% ( 1) 00:10:24.469 11.206 - 11.258: 0.0336% ( 1) 00:10:24.469 11.772 - 11.823: 0.0503% ( 1) 00:10:24.469 11.875 - 11.926: 0.0839% ( 2) 00:10:24.469 11.926 - 11.978: 0.1342% ( 3) 00:10:24.469 11.978 - 12.029: 0.2181% ( 5) 00:10:24.469 12.029 - 12.080: 0.2685% ( 3) 00:10:24.469 12.080 - 12.132: 0.4195% ( 9) 00:10:24.469 12.132 - 12.183: 0.4698% ( 3) 00:10:24.469 12.183 - 12.235: 0.5705% ( 6) 00:10:24.469 12.235 - 12.286: 0.6376% ( 4) 00:10:24.469 12.286 - 12.337: 0.8725% ( 14) 00:10:24.469 12.337 - 12.389: 1.5940% ( 43) 00:10:24.469 12.389 - 12.440: 3.1376% ( 92) 00:10:24.469 12.440 - 12.492: 5.6040% ( 147) 00:10:24.469 12.492 - 12.543: 8.4899% ( 172) 00:10:24.469 12.543 - 12.594: 11.7450% ( 194) 00:10:24.469 12.594 - 12.646: 15.4698% ( 222) 00:10:24.469 12.646 - 12.697: 18.2718% ( 167) 00:10:24.469 12.697 - 12.749: 21.1242% ( 170) 00:10:24.469 12.749 - 12.800: 24.3456% ( 192) 00:10:24.469 12.800 - 12.851: 27.4329% ( 184) 00:10:24.469 12.851 - 12.903: 30.7215% ( 196) 00:10:24.469 12.903 - 12.954: 34.6141% ( 232) 00:10:24.469 12.954 - 13.006: 39.4799% ( 290) 00:10:24.469 13.006 - 13.057: 45.6040% ( 365) 00:10:24.469 13.057 - 13.108: 51.3087% ( 340) 00:10:24.469 13.108 - 13.160: 56.8792% ( 332) 00:10:24.469 13.160 - 13.263: 66.3423% ( 564) 00:10:24.469 13.263 - 13.365: 74.7483% ( 501) 00:10:24.469 13.365 - 13.468: 81.1074% ( 379) 00:10:24.469 13.468 - 13.571: 86.1745% ( 302) 00:10:24.469 13.571 - 13.674: 89.3792% ( 191) 00:10:24.469 13.674 - 13.777: 91.2752% ( 113) 00:10:24.469 13.777 - 13.880: 92.4161% ( 68) 00:10:24.469 13.880 - 13.982: 92.9866% ( 34) 00:10:24.469 13.982 - 14.085: 93.3221% ( 20) 00:10:24.469 14.085 - 14.188: 93.5570% ( 14) 00:10:24.469 14.188 - 14.291: 93.7081% ( 9) 00:10:24.469 14.291 - 14.394: 93.7752% ( 4) 00:10:24.469 14.496 - 14.599: 93.8255% ( 3) 00:10:24.469 14.702 - 14.805: 93.8591% ( 2) 00:10:24.469 14.805 - 14.908: 93.8758% ( 1) 00:10:24.469 14.908 - 15.010: 93.9262% ( 3) 00:10:24.469 15.010 - 15.113: 93.9430% ( 1) 00:10:24.469 15.216 - 15.319: 93.9597% ( 1) 00:10:24.469 15.319 - 15.422: 93.9765% ( 1) 00:10:24.469 15.422 - 15.524: 94.0101% ( 2) 00:10:24.469 15.524 - 15.627: 94.0436% ( 2) 00:10:24.469 15.936 - 16.039: 94.0604% ( 1) 00:10:24.469 16.141 - 16.244: 94.0772% ( 1) 00:10:24.469 16.244 - 16.347: 94.1611% ( 5) 00:10:24.469 16.347 - 16.450: 94.2953% ( 8) 00:10:24.469 16.450 - 16.553: 94.3960% ( 6) 00:10:24.469 16.553 - 16.655: 94.5134% ( 7) 00:10:24.469 16.655 - 16.758: 94.6309% ( 7) 00:10:24.469 16.758 - 16.861: 94.7651% ( 8) 00:10:24.469 16.861 - 16.964: 94.9664% ( 12) 00:10:24.469 16.964 - 17.067: 95.1510% ( 11) 00:10:24.469 17.067 - 17.169: 95.4027% ( 15) 00:10:24.469 17.169 - 17.272: 95.6711% ( 16) 00:10:24.469 17.272 - 17.375: 95.8557% ( 11) 00:10:24.469 17.375 - 17.478: 96.0067% ( 9) 00:10:24.469 17.478 - 17.581: 96.1745% ( 10) 00:10:24.469 17.581 - 17.684: 96.2752% ( 6) 00:10:24.469 17.684 - 17.786: 96.4094% ( 8) 00:10:24.469 17.786 - 17.889: 96.6107% ( 12) 00:10:24.469 17.889 - 17.992: 96.6443% ( 2) 00:10:24.469 17.992 - 18.095: 96.7617% ( 7) 00:10:24.469 18.095 - 18.198: 96.8792% ( 7) 00:10:24.469 18.198 - 18.300: 96.9631% ( 5) 00:10:24.469 18.300 - 18.403: 97.0302% ( 4) 00:10:24.469 18.403 - 18.506: 97.1477% ( 7) 00:10:24.469 18.506 - 18.609: 97.2987% ( 9) 
00:10:24.469 18.609 - 18.712: 97.4161% ( 7) 00:10:24.469 18.712 - 18.814: 97.5671% ( 9) 00:10:24.469 18.814 - 18.917: 97.7181% ( 9) 00:10:24.469 18.917 - 19.020: 97.9195% ( 12) 00:10:24.469 19.020 - 19.123: 97.9866% ( 4) 00:10:24.469 19.123 - 19.226: 98.0705% ( 5) 00:10:24.469 19.226 - 19.329: 98.1544% ( 5) 00:10:24.469 19.329 - 19.431: 98.1879% ( 2) 00:10:24.469 19.431 - 19.534: 98.3221% ( 8) 00:10:24.469 19.534 - 19.637: 98.4228% ( 6) 00:10:24.469 19.637 - 19.740: 98.4899% ( 4) 00:10:24.469 19.740 - 19.843: 98.5906% ( 6) 00:10:24.469 19.843 - 19.945: 98.7081% ( 7) 00:10:24.469 19.945 - 20.048: 98.8087% ( 6) 00:10:24.469 20.048 - 20.151: 98.8591% ( 3) 00:10:24.469 20.151 - 20.254: 98.9262% ( 4) 00:10:24.469 20.254 - 20.357: 98.9597% ( 2) 00:10:24.469 20.357 - 20.459: 98.9933% ( 2) 00:10:24.469 20.459 - 20.562: 99.0436% ( 3) 00:10:24.469 20.562 - 20.665: 99.0772% ( 2) 00:10:24.469 20.768 - 20.871: 99.0940% ( 1) 00:10:24.470 20.871 - 20.973: 99.1107% ( 1) 00:10:24.470 21.076 - 21.179: 99.1275% ( 1) 00:10:24.470 21.282 - 21.385: 99.1443% ( 1) 00:10:24.470 21.385 - 21.488: 99.1779% ( 2) 00:10:24.470 21.488 - 21.590: 99.1946% ( 1) 00:10:24.470 21.693 - 21.796: 99.2282% ( 2) 00:10:24.470 21.796 - 21.899: 99.2617% ( 2) 00:10:24.470 22.104 - 22.207: 99.3121% ( 3) 00:10:24.470 22.207 - 22.310: 99.3289% ( 1) 00:10:24.470 22.310 - 22.413: 99.3456% ( 1) 00:10:24.470 22.516 - 22.618: 99.3624% ( 1) 00:10:24.470 22.824 - 22.927: 99.3792% ( 1) 00:10:24.470 22.927 - 23.030: 99.3960% ( 1) 00:10:24.470 23.133 - 23.235: 99.4128% ( 1) 00:10:24.470 23.338 - 23.441: 99.4463% ( 2) 00:10:24.470 23.544 - 23.647: 99.4631% ( 1) 00:10:24.470 23.647 - 23.749: 99.4966% ( 2) 00:10:24.470 23.749 - 23.852: 99.5302% ( 2) 00:10:24.470 23.852 - 23.955: 99.5470% ( 1) 00:10:24.470 24.366 - 24.469: 99.5638% ( 1) 00:10:24.470 25.497 - 25.600: 99.5805% ( 1) 00:10:24.470 26.217 - 26.320: 99.5973% ( 1) 00:10:24.470 26.320 - 26.525: 99.6141% ( 1) 00:10:24.470 26.731 - 26.937: 99.6309% ( 1) 00:10:24.470 26.937 - 27.142: 99.6477% ( 1) 00:10:24.470 27.142 - 27.348: 99.6644% ( 1) 00:10:24.470 27.348 - 27.553: 99.6812% ( 1) 00:10:24.470 28.787 - 28.993: 99.6980% ( 1) 00:10:24.470 29.815 - 30.021: 99.7148% ( 1) 00:10:24.470 30.843 - 31.049: 99.7315% ( 1) 00:10:24.470 32.694 - 32.900: 99.7483% ( 1) 00:10:24.470 36.395 - 36.601: 99.7651% ( 1) 00:10:24.470 37.012 - 37.218: 99.7819% ( 1) 00:10:24.470 38.246 - 38.451: 99.7987% ( 1) 00:10:24.470 39.068 - 39.274: 99.8154% ( 1) 00:10:24.470 39.891 - 40.096: 99.8322% ( 1) 00:10:24.470 45.854 - 46.059: 99.8490% ( 1) 00:10:24.470 46.882 - 47.088: 99.8658% ( 1) 00:10:24.470 47.293 - 47.499: 99.8826% ( 1) 00:10:24.470 52.228 - 52.434: 99.8993% ( 1) 00:10:24.470 62.509 - 62.920: 99.9161% ( 1) 00:10:24.470 66.210 - 66.622: 99.9329% ( 1) 00:10:24.470 72.790 - 73.202: 99.9497% ( 1) 00:10:24.470 74.435 - 74.847: 99.9664% ( 1) 00:10:24.470 89.240 - 89.651: 99.9832% ( 1) 00:10:24.470 101.166 - 101.578: 100.0000% ( 1) 00:10:24.470 00:10:24.470 Complete histogram 00:10:24.470 ================== 00:10:24.470 Range in us Cumulative Count 00:10:24.470 7.762 - 7.814: 0.0503% ( 3) 00:10:24.470 7.814 - 7.865: 0.9060% ( 51) 00:10:24.470 7.865 - 7.916: 5.4195% ( 269) 00:10:24.470 7.916 - 7.968: 15.1510% ( 580) 00:10:24.470 7.968 - 8.019: 28.9430% ( 822) 00:10:24.470 8.019 - 8.071: 40.4362% ( 685) 00:10:24.470 8.071 - 8.122: 48.0034% ( 451) 00:10:24.470 8.122 - 8.173: 52.3826% ( 261) 00:10:24.470 8.173 - 8.225: 54.6309% ( 134) 00:10:24.470 8.225 - 8.276: 55.8725% ( 74) 00:10:24.470 8.276 - 8.328: 56.4933% ( 37) 
00:10:24.470 8.328 - 8.379: 56.9631% ( 28) 00:10:24.470 8.379 - 8.431: 57.2987% ( 20) 00:10:24.470 8.431 - 8.482: 57.5000% ( 12) 00:10:24.470 8.482 - 8.533: 57.6846% ( 11) 00:10:24.470 8.533 - 8.585: 58.0369% ( 21) 00:10:24.470 8.585 - 8.636: 58.7919% ( 45) 00:10:24.470 8.636 - 8.688: 60.1342% ( 80) 00:10:24.470 8.688 - 8.739: 61.5772% ( 86) 00:10:24.470 8.739 - 8.790: 63.3557% ( 106) 00:10:24.470 8.790 - 8.842: 65.7550% ( 143) 00:10:24.470 8.842 - 8.893: 69.1779% ( 204) 00:10:24.470 8.893 - 8.945: 73.3389% ( 248) 00:10:24.470 8.945 - 8.996: 77.4329% ( 244) 00:10:24.470 8.996 - 9.047: 80.7047% ( 195) 00:10:24.470 9.047 - 9.099: 84.3960% ( 220) 00:10:24.470 9.099 - 9.150: 87.5168% ( 186) 00:10:24.470 9.150 - 9.202: 90.2852% ( 165) 00:10:24.470 9.202 - 9.253: 92.2651% ( 118) 00:10:24.470 9.253 - 9.304: 93.6409% ( 82) 00:10:24.470 9.304 - 9.356: 94.7819% ( 68) 00:10:24.470 9.356 - 9.407: 95.5705% ( 47) 00:10:24.470 9.407 - 9.459: 96.1913% ( 37) 00:10:24.470 9.459 - 9.510: 96.5772% ( 23) 00:10:24.470 9.510 - 9.561: 96.8121% ( 14) 00:10:24.470 9.561 - 9.613: 96.9631% ( 9) 00:10:24.470 9.613 - 9.664: 97.0973% ( 8) 00:10:24.470 9.664 - 9.716: 97.1980% ( 6) 00:10:24.470 9.716 - 9.767: 97.2483% ( 3) 00:10:24.470 9.767 - 9.818: 97.2987% ( 3) 00:10:24.470 9.818 - 9.870: 97.3154% ( 1) 00:10:24.470 9.870 - 9.921: 97.3490% ( 2) 00:10:24.470 9.973 - 10.024: 97.3658% ( 1) 00:10:24.470 10.024 - 10.076: 97.3826% ( 1) 00:10:24.470 10.076 - 10.127: 97.3993% ( 1) 00:10:24.470 10.178 - 10.230: 97.4161% ( 1) 00:10:24.470 10.384 - 10.435: 97.4497% ( 2) 00:10:24.470 10.487 - 10.538: 97.4664% ( 1) 00:10:24.470 10.538 - 10.590: 97.4832% ( 1) 00:10:24.470 10.590 - 10.641: 97.5000% ( 1) 00:10:24.470 10.692 - 10.744: 97.5168% ( 1) 00:10:24.470 10.744 - 10.795: 97.5503% ( 2) 00:10:24.470 10.847 - 10.898: 97.5839% ( 2) 00:10:24.470 11.052 - 11.104: 97.6007% ( 1) 00:10:24.470 11.206 - 11.258: 97.6174% ( 1) 00:10:24.470 11.309 - 11.361: 97.6510% ( 2) 00:10:24.470 11.361 - 11.412: 97.6846% ( 2) 00:10:24.470 11.412 - 11.463: 97.7013% ( 1) 00:10:24.470 11.463 - 11.515: 97.7349% ( 2) 00:10:24.470 11.618 - 11.669: 97.7517% ( 1) 00:10:24.470 11.720 - 11.772: 97.7685% ( 1) 00:10:24.470 11.823 - 11.875: 97.7852% ( 1) 00:10:24.470 11.875 - 11.926: 97.8020% ( 1) 00:10:24.470 12.183 - 12.235: 97.8188% ( 1) 00:10:24.470 12.286 - 12.337: 97.8356% ( 1) 00:10:24.470 12.492 - 12.543: 97.8523% ( 1) 00:10:24.470 12.543 - 12.594: 97.8859% ( 2) 00:10:24.470 12.646 - 12.697: 97.9195% ( 2) 00:10:24.470 12.697 - 12.749: 97.9362% ( 1) 00:10:24.470 12.749 - 12.800: 97.9530% ( 1) 00:10:24.470 12.800 - 12.851: 97.9698% ( 1) 00:10:24.470 12.851 - 12.903: 98.0034% ( 2) 00:10:24.470 12.954 - 13.006: 98.0201% ( 1) 00:10:24.470 13.057 - 13.108: 98.0369% ( 1) 00:10:24.470 13.160 - 13.263: 98.1376% ( 6) 00:10:24.470 13.263 - 13.365: 98.1711% ( 2) 00:10:24.470 13.365 - 13.468: 98.3221% ( 9) 00:10:24.470 13.468 - 13.571: 98.3725% ( 3) 00:10:24.470 13.571 - 13.674: 98.4060% ( 2) 00:10:24.470 13.674 - 13.777: 98.4732% ( 4) 00:10:24.470 13.777 - 13.880: 98.5235% ( 3) 00:10:24.470 13.880 - 13.982: 98.5403% ( 1) 00:10:24.470 13.982 - 14.085: 98.5570% ( 1) 00:10:24.470 14.085 - 14.188: 98.6745% ( 7) 00:10:24.470 14.188 - 14.291: 98.7081% ( 2) 00:10:24.470 14.291 - 14.394: 98.7584% ( 3) 00:10:24.470 14.394 - 14.496: 98.8255% ( 4) 00:10:24.470 14.599 - 14.702: 98.9094% ( 5) 00:10:24.470 14.702 - 14.805: 98.9765% ( 4) 00:10:24.470 14.805 - 14.908: 99.0101% ( 2) 00:10:24.470 14.908 - 15.010: 99.0772% ( 4) 00:10:24.470 15.010 - 15.113: 99.1107% ( 2) 00:10:24.470 15.113 
- 15.216: 99.1443% ( 2) 00:10:24.470 15.216 - 15.319: 99.2450% ( 6) 00:10:24.470 15.319 - 15.422: 99.2953% ( 3) 00:10:24.470 15.422 - 15.524: 99.3289% ( 2) 00:10:24.470 15.524 - 15.627: 99.3624% ( 2) 00:10:24.470 15.627 - 15.730: 99.3792% ( 1) 00:10:24.470 15.730 - 15.833: 99.4128% ( 2) 00:10:24.470 15.833 - 15.936: 99.4295% ( 1) 00:10:24.470 16.553 - 16.655: 99.4463% ( 1) 00:10:24.470 16.758 - 16.861: 99.4631% ( 1) 00:10:24.470 16.964 - 17.067: 99.4966% ( 2) 00:10:24.470 17.272 - 17.375: 99.5134% ( 1) 00:10:24.470 19.020 - 19.123: 99.5302% ( 1) 00:10:24.470 19.123 - 19.226: 99.5470% ( 1) 00:10:24.470 19.329 - 19.431: 99.5638% ( 1) 00:10:24.470 19.431 - 19.534: 99.5805% ( 1) 00:10:24.470 19.534 - 19.637: 99.6141% ( 2) 00:10:24.470 19.843 - 19.945: 99.6477% ( 2) 00:10:24.470 21.282 - 21.385: 99.6644% ( 1) 00:10:24.470 21.385 - 21.488: 99.6812% ( 1) 00:10:24.470 23.030 - 23.133: 99.6980% ( 1) 00:10:24.470 25.394 - 25.497: 99.7148% ( 1) 00:10:24.470 25.497 - 25.600: 99.7315% ( 1) 00:10:24.470 25.703 - 25.806: 99.7483% ( 1) 00:10:24.470 26.114 - 26.217: 99.7651% ( 1) 00:10:24.470 27.348 - 27.553: 99.7987% ( 2) 00:10:24.470 27.553 - 27.759: 99.8154% ( 1) 00:10:24.470 27.965 - 28.170: 99.8490% ( 2) 00:10:24.471 28.376 - 28.582: 99.8658% ( 1) 00:10:24.471 30.638 - 30.843: 99.8826% ( 1) 00:10:24.471 30.843 - 31.049: 99.8993% ( 1) 00:10:24.471 34.545 - 34.750: 99.9161% ( 1) 00:10:24.471 35.573 - 35.778: 99.9329% ( 1) 00:10:24.471 36.190 - 36.395: 99.9497% ( 1) 00:10:24.471 37.629 - 37.835: 99.9664% ( 1) 00:10:24.471 62.920 - 63.332: 99.9832% ( 1) 00:10:24.471 67.033 - 67.444: 100.0000% ( 1) 00:10:24.471 00:10:24.471 00:10:24.471 real 0m1.315s 00:10:24.471 user 0m1.096s 00:10:24.471 sys 0m0.166s 00:10:24.471 16:04:43 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:24.471 16:04:43 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:10:24.471 ************************************ 00:10:24.471 END TEST nvme_overhead 00:10:24.471 ************************************ 00:10:24.471 16:04:43 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:24.471 16:04:43 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:10:24.471 16:04:43 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:24.471 16:04:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:24.471 ************************************ 00:10:24.471 START TEST nvme_arbitration 00:10:24.471 ************************************ 00:10:24.471 16:04:43 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:28.658 Initializing NVMe Controllers 00:10:28.658 Attached to 0000:00:10.0 00:10:28.659 Attached to 0000:00:11.0 00:10:28.659 Attached to 0000:00:13.0 00:10:28.659 Attached to 0000:00:12.0 00:10:28.659 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:28.659 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:28.659 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:28.659 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:28.659 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:28.659 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:28.659 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:28.659 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:28.659 Initialization complete. 
Launching workers. 00:10:28.659 Starting thread on core 1 with urgent priority queue 00:10:28.659 Starting thread on core 2 with urgent priority queue 00:10:28.659 Starting thread on core 3 with urgent priority queue 00:10:28.659 Starting thread on core 0 with urgent priority queue 00:10:28.659 QEMU NVMe Ctrl (12340 ) core 0: 448.00 IO/s 223.21 secs/100000 ios 00:10:28.659 QEMU NVMe Ctrl (12342 ) core 0: 448.00 IO/s 223.21 secs/100000 ios 00:10:28.659 QEMU NVMe Ctrl (12341 ) core 1: 426.67 IO/s 234.38 secs/100000 ios 00:10:28.659 QEMU NVMe Ctrl (12342 ) core 1: 426.67 IO/s 234.38 secs/100000 ios 00:10:28.659 QEMU NVMe Ctrl (12343 ) core 2: 469.33 IO/s 213.07 secs/100000 ios 00:10:28.659 QEMU NVMe Ctrl (12342 ) core 3: 896.00 IO/s 111.61 secs/100000 ios 00:10:28.659 ======================================================== 00:10:28.659 00:10:28.659 00:10:28.659 real 0m3.440s 00:10:28.659 user 0m9.366s 00:10:28.659 sys 0m0.169s 00:10:28.659 16:04:46 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:28.659 16:04:46 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:10:28.659 ************************************ 00:10:28.659 END TEST nvme_arbitration 00:10:28.659 ************************************ 00:10:28.659 16:04:46 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:28.659 16:04:46 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:28.659 16:04:46 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:28.659 16:04:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:28.659 ************************************ 00:10:28.659 START TEST nvme_single_aen 00:10:28.659 ************************************ 00:10:28.659 16:04:46 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:28.659 Asynchronous Event Request test 00:10:28.659 Attached to 0000:00:10.0 00:10:28.659 Attached to 0000:00:11.0 00:10:28.659 Attached to 0000:00:13.0 00:10:28.659 Attached to 0000:00:12.0 00:10:28.659 Reset controller to setup AER completions for this process 00:10:28.659 Registering asynchronous event callbacks... 
00:10:28.659 Getting orig temperature thresholds of all controllers 00:10:28.659 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:28.659 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:28.659 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:28.659 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:28.659 Setting all controllers temperature threshold low to trigger AER 00:10:28.659 Waiting for all controllers temperature threshold to be set lower 00:10:28.659 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:28.659 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:28.659 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:28.659 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:28.659 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:28.659 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:28.659 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:28.659 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:28.659 Waiting for all controllers to trigger AER and reset threshold 00:10:28.659 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:28.659 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:28.659 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:28.659 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:28.659 Cleaning up... 00:10:28.659 00:10:28.659 real 0m0.300s 00:10:28.659 user 0m0.104s 00:10:28.659 sys 0m0.151s 00:10:28.659 16:04:46 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:28.659 16:04:46 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:10:28.659 ************************************ 00:10:28.659 END TEST nvme_single_aen 00:10:28.659 ************************************ 00:10:28.659 16:04:47 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:28.659 16:04:47 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:28.659 16:04:47 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:28.659 16:04:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:28.659 ************************************ 00:10:28.659 START TEST nvme_doorbell_aers 00:10:28.659 ************************************ 00:10:28.659 16:04:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers 00:10:28.659 16:04:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:10:28.659 16:04:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:28.659 16:04:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:28.659 16:04:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:28.659 16:04:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:28.659 16:04:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:10:28.659 16:04:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:28.659 16:04:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:28.659 16:04:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
00:10:28.659 16:04:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:28.659 16:04:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:28.659 16:04:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:28.659 16:04:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:28.944 [2024-11-04 16:04:47.478479] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64597) is not found. Dropping the request. 00:10:38.927 Executing: test_write_invalid_db 00:10:38.927 Waiting for AER completion... 00:10:38.927 Failure: test_write_invalid_db 00:10:38.927 00:10:38.927 Executing: test_invalid_db_write_overflow_sq 00:10:38.927 Waiting for AER completion... 00:10:38.927 Failure: test_invalid_db_write_overflow_sq 00:10:38.927 00:10:38.927 Executing: test_invalid_db_write_overflow_cq 00:10:38.927 Waiting for AER completion... 00:10:38.927 Failure: test_invalid_db_write_overflow_cq 00:10:38.927 00:10:38.927 16:04:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:38.927 16:04:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:38.928 [2024-11-04 16:04:57.529425] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64597) is not found. Dropping the request. 00:10:48.903 Executing: test_write_invalid_db 00:10:48.903 Waiting for AER completion... 00:10:48.903 Failure: test_write_invalid_db 00:10:48.903 00:10:48.903 Executing: test_invalid_db_write_overflow_sq 00:10:48.903 Waiting for AER completion... 00:10:48.903 Failure: test_invalid_db_write_overflow_sq 00:10:48.903 00:10:48.903 Executing: test_invalid_db_write_overflow_cq 00:10:48.903 Waiting for AER completion... 00:10:48.903 Failure: test_invalid_db_write_overflow_cq 00:10:48.903 00:10:48.903 16:05:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:48.903 16:05:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:48.903 [2024-11-04 16:05:07.546074] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64597) is not found. Dropping the request. 00:10:58.880 Executing: test_write_invalid_db 00:10:58.880 Waiting for AER completion... 00:10:58.880 Failure: test_write_invalid_db 00:10:58.880 00:10:58.880 Executing: test_invalid_db_write_overflow_sq 00:10:58.880 Waiting for AER completion... 00:10:58.880 Failure: test_invalid_db_write_overflow_sq 00:10:58.880 00:10:58.880 Executing: test_invalid_db_write_overflow_cq 00:10:58.880 Waiting for AER completion... 
00:10:58.880 Failure: test_invalid_db_write_overflow_cq 00:10:58.880 00:10:58.880 16:05:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:58.880 16:05:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:59.139 [2024-11-04 16:05:17.633415] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64597) is not found. Dropping the request. 00:11:09.126 Executing: test_write_invalid_db 00:11:09.126 Waiting for AER completion... 00:11:09.126 Failure: test_write_invalid_db 00:11:09.126 00:11:09.126 Executing: test_invalid_db_write_overflow_sq 00:11:09.126 Waiting for AER completion... 00:11:09.126 Failure: test_invalid_db_write_overflow_sq 00:11:09.126 00:11:09.126 Executing: test_invalid_db_write_overflow_cq 00:11:09.126 Waiting for AER completion... 00:11:09.126 Failure: test_invalid_db_write_overflow_cq 00:11:09.126 00:11:09.126 00:11:09.126 real 0m40.313s 00:11:09.126 user 0m28.404s 00:11:09.126 sys 0m11.531s 00:11:09.126 16:05:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:09.126 16:05:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:11:09.126 ************************************ 00:11:09.126 END TEST nvme_doorbell_aers 00:11:09.126 ************************************ 00:11:09.126 16:05:27 nvme -- nvme/nvme.sh@97 -- # uname 00:11:09.126 16:05:27 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:11:09.126 16:05:27 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:09.126 16:05:27 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:11:09.126 16:05:27 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:09.126 16:05:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:09.126 ************************************ 00:11:09.126 START TEST nvme_multi_aen 00:11:09.126 ************************************ 00:11:09.126 16:05:27 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:09.126 [2024-11-04 16:05:27.709943] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64597) is not found. Dropping the request. 00:11:09.126 [2024-11-04 16:05:27.710048] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64597) is not found. Dropping the request. 00:11:09.126 [2024-11-04 16:05:27.710071] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64597) is not found. Dropping the request. 00:11:09.126 [2024-11-04 16:05:27.711823] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64597) is not found. Dropping the request. 00:11:09.126 [2024-11-04 16:05:27.711860] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64597) is not found. Dropping the request. 00:11:09.126 [2024-11-04 16:05:27.711874] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64597) is not found. Dropping the request. 00:11:09.126 [2024-11-04 16:05:27.713288] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64597) is not found. 
Dropping the request. 00:11:09.126 [2024-11-04 16:05:27.713450] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64597) is not found. Dropping the request. 00:11:09.126 [2024-11-04 16:05:27.713469] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64597) is not found. Dropping the request. 00:11:09.126 [2024-11-04 16:05:27.714776] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64597) is not found. Dropping the request. 00:11:09.126 [2024-11-04 16:05:27.714810] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64597) is not found. Dropping the request. 00:11:09.126 [2024-11-04 16:05:27.714823] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64597) is not found. Dropping the request. 00:11:09.126 Child process pid: 65114 00:11:09.385 [Child] Asynchronous Event Request test 00:11:09.385 [Child] Attached to 0000:00:10.0 00:11:09.385 [Child] Attached to 0000:00:11.0 00:11:09.385 [Child] Attached to 0000:00:13.0 00:11:09.385 [Child] Attached to 0000:00:12.0 00:11:09.385 [Child] Registering asynchronous event callbacks... 00:11:09.385 [Child] Getting orig temperature thresholds of all controllers 00:11:09.385 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:09.385 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:09.385 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:09.385 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:09.385 [Child] Waiting for all controllers to trigger AER and reset threshold 00:11:09.385 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:09.385 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:09.385 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:09.385 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:09.385 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:09.385 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:09.385 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:09.385 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:09.385 [Child] Cleaning up... 00:11:09.644 Asynchronous Event Request test 00:11:09.644 Attached to 0000:00:10.0 00:11:09.644 Attached to 0000:00:11.0 00:11:09.644 Attached to 0000:00:13.0 00:11:09.644 Attached to 0000:00:12.0 00:11:09.644 Reset controller to setup AER completions for this process 00:11:09.644 Registering asynchronous event callbacks... 
00:11:09.644 Getting orig temperature thresholds of all controllers 00:11:09.644 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:09.644 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:09.644 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:09.644 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:09.644 Setting all controllers temperature threshold low to trigger AER 00:11:09.644 Waiting for all controllers temperature threshold to be set lower 00:11:09.644 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:09.644 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:09.644 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:09.644 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:09.644 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:09.644 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:09.644 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:09.644 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:09.644 Waiting for all controllers to trigger AER and reset threshold 00:11:09.644 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:09.644 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:09.644 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:09.644 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:09.644 Cleaning up... 00:11:09.644 00:11:09.644 real 0m0.665s 00:11:09.644 user 0m0.241s 00:11:09.644 sys 0m0.319s 00:11:09.644 16:05:28 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:09.644 16:05:28 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:11:09.644 ************************************ 00:11:09.644 END TEST nvme_multi_aen 00:11:09.644 ************************************ 00:11:09.644 16:05:28 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:09.644 16:05:28 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:09.644 16:05:28 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:09.644 16:05:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:09.644 ************************************ 00:11:09.644 START TEST nvme_startup 00:11:09.644 ************************************ 00:11:09.644 16:05:28 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:09.902 Initializing NVMe Controllers 00:11:09.902 Attached to 0000:00:10.0 00:11:09.902 Attached to 0000:00:11.0 00:11:09.902 Attached to 0000:00:13.0 00:11:09.902 Attached to 0000:00:12.0 00:11:09.902 Initialization complete. 00:11:09.902 Time used:194797.391 (us). 
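The single_aen and multi_aen runs above both follow the same AER pattern: register an asynchronous event callback, lower the composite temperature threshold below the controller's reported temperature (323 K here) so a SMART/health event fires, handle the event from the callback, and restore the original threshold. A compressed sketch of the arming side follows; the 0 K threshold and the busy-poll loop are illustrative simplifications of what the aer test does.

/* Sketch: arm an AER on a temperature-threshold crossing and wait for it. */
#include <stdbool.h>
#include "spdk/nvme.h"

static volatile bool g_aer_seen;

static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    /* The tests read log page 2 (SMART / health information) here to see
     * which event fired; this sketch only notes that one arrived. */
    (void)arg; (void)cpl;
    g_aer_seen = true;
}

static void set_feature_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    (void)arg; (void)cpl;
}

static void trigger_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

    /* Composite temperature threshold -> 0 Kelvin (cdw11), guaranteed to be
     * below the 323 K the controllers report above. */
    spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                    0, 0, NULL, 0, set_feature_done, NULL);

    while (!g_aer_seen) {
        /* AER completions, like all admin completions, surface here. */
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
}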
00:11:09.902 00:11:09.902 real 0m0.291s 00:11:09.902 user 0m0.110s 00:11:09.902 sys 0m0.134s 00:11:09.902 16:05:28 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:09.902 16:05:28 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:11:09.902 ************************************ 00:11:09.902 END TEST nvme_startup 00:11:09.902 ************************************ 00:11:09.902 16:05:28 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:11:09.902 16:05:28 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:09.902 16:05:28 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:09.902 16:05:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:09.902 ************************************ 00:11:09.902 START TEST nvme_multi_secondary 00:11:09.902 ************************************ 00:11:09.902 16:05:28 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:11:09.902 16:05:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65170 00:11:09.902 16:05:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:11:09.902 16:05:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65171 00:11:09.902 16:05:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:09.902 16:05:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:14.094 Initializing NVMe Controllers 00:11:14.094 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:14.094 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:14.094 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:14.094 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:14.094 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:14.094 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:14.094 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:14.094 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:14.094 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:14.094 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:14.094 Initialization complete. Launching workers. 
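The three spdk_nvme_perf invocations above run concurrently from different core masks (-c 0x1, 0x2, 0x4) while sharing shared-memory group id 0 (-i 0), which lets them attach to the same controllers side by side under the driver's multi-process support; each keeps 16 reads of 4096 bytes outstanding (-q 16 -o 4096 -w read). Stripped of timing and statistics, such a fixed-queue-depth reader reduces to "submit until the queue is full, poll completions, top the queue back up". A simplified sketch follows, with a single reused buffer and a fixed LBA purely to keep it short.

/* Sketch: keep QUEUE_DEPTH 4 KiB reads in flight on one qpair. */
#include <stdint.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

#define QUEUE_DEPTH 16
#define IO_SIZE     4096

static int g_outstanding;

static void read_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    (void)arg; (void)cpl;
    g_outstanding--;                     /* the main loop refills the queue */
}

static void run_fixed_qd_reads(struct spdk_nvme_ns *ns,
                               struct spdk_nvme_qpair *qp, uint64_t num_ios)
{
    uint32_t lbas = IO_SIZE / spdk_nvme_ns_get_sector_size(ns);
    void *buf = spdk_zmalloc(IO_SIZE, IO_SIZE, NULL, SPDK_ENV_SOCKET_ID_ANY,
                             SPDK_MALLOC_DMA);
    uint64_t submitted = 0;

    while (submitted < num_ios || g_outstanding > 0) {
        while (g_outstanding < QUEUE_DEPTH && submitted < num_ios) {
            if (spdk_nvme_ns_cmd_read(ns, qp, buf, 0 /* LBA */, lbas,
                                      read_done, NULL, 0) != 0) {
                break;                   /* qpair momentarily full; poll first */
            }
            g_outstanding++;
            submitted++;
        }
        spdk_nvme_qpair_process_completions(qp, 0);
    }
    spdk_free(buf);
}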
00:11:14.094 ======================================================== 00:11:14.094 Latency(us) 00:11:14.094 Device Information : IOPS MiB/s Average min max 00:11:14.094 PCIE (0000:00:10.0) NSID 1 from core 1: 4184.93 16.35 3820.65 1774.41 10908.71 00:11:14.094 PCIE (0000:00:11.0) NSID 1 from core 1: 4184.93 16.35 3822.80 1542.02 10071.40 00:11:14.094 PCIE (0000:00:13.0) NSID 1 from core 1: 4184.93 16.35 3822.91 1700.93 10313.13 00:11:14.094 PCIE (0000:00:12.0) NSID 1 from core 1: 4184.93 16.35 3823.16 1632.79 11343.43 00:11:14.094 PCIE (0000:00:12.0) NSID 2 from core 1: 4184.93 16.35 3823.83 1791.85 11184.58 00:11:14.094 PCIE (0000:00:12.0) NSID 3 from core 1: 4184.93 16.35 3824.66 1826.17 10670.76 00:11:14.094 ======================================================== 00:11:14.094 Total : 25109.60 98.08 3823.00 1542.02 11343.43 00:11:14.094 00:11:14.094 Initializing NVMe Controllers 00:11:14.094 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:14.094 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:14.094 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:14.094 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:14.094 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:14.094 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:14.094 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:14.094 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:14.094 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:14.094 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:14.094 Initialization complete. Launching workers. 00:11:14.094 ======================================================== 00:11:14.094 Latency(us) 00:11:14.094 Device Information : IOPS MiB/s Average min max 00:11:14.094 PCIE (0000:00:10.0) NSID 1 from core 2: 2804.15 10.95 5704.54 1346.01 20162.31 00:11:14.094 PCIE (0000:00:11.0) NSID 1 from core 2: 2804.48 10.96 5702.19 1299.29 20062.13 00:11:14.094 PCIE (0000:00:13.0) NSID 1 from core 2: 2804.48 10.96 5696.89 1208.30 25371.58 00:11:14.094 PCIE (0000:00:12.0) NSID 1 from core 2: 2804.48 10.96 5696.78 1142.27 19561.90 00:11:14.094 PCIE (0000:00:12.0) NSID 2 from core 2: 2804.48 10.96 5696.61 884.55 15901.46 00:11:14.094 PCIE (0000:00:12.0) NSID 3 from core 2: 2804.48 10.96 5696.48 849.63 19011.02 00:11:14.094 ======================================================== 00:11:14.094 Total : 16826.56 65.73 5698.91 849.63 25371.58 00:11:14.094 00:11:14.094 16:05:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65170 00:11:15.472 Initializing NVMe Controllers 00:11:15.472 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:15.472 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:15.472 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:15.472 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:15.472 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:15.472 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:15.472 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:15.472 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:15.472 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:15.472 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:15.472 Initialization complete. Launching workers. 
00:11:15.472 ======================================================== 00:11:15.472 Latency(us) 00:11:15.472 Device Information : IOPS MiB/s Average min max 00:11:15.472 PCIE (0000:00:10.0) NSID 1 from core 0: 8393.18 32.79 1904.86 921.52 7511.98 00:11:15.472 PCIE (0000:00:11.0) NSID 1 from core 0: 8393.18 32.79 1905.90 923.10 7453.18 00:11:15.472 PCIE (0000:00:13.0) NSID 1 from core 0: 8393.18 32.79 1905.87 907.43 7278.33 00:11:15.472 PCIE (0000:00:12.0) NSID 1 from core 0: 8393.18 32.79 1905.84 833.81 7621.40 00:11:15.472 PCIE (0000:00:12.0) NSID 2 from core 0: 8393.18 32.79 1905.81 777.50 7707.19 00:11:15.472 PCIE (0000:00:12.0) NSID 3 from core 0: 8396.38 32.80 1905.07 738.66 7324.43 00:11:15.472 ======================================================== 00:11:15.472 Total : 50362.30 196.73 1905.56 738.66 7707.19 00:11:15.472 00:11:15.472 16:05:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65171 00:11:15.472 16:05:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65240 00:11:15.472 16:05:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:15.472 16:05:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:15.472 16:05:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65241 00:11:15.472 16:05:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:18.761 Initializing NVMe Controllers 00:11:18.761 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:18.761 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:18.761 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:18.761 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:18.761 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:18.761 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:18.761 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:18.761 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:18.761 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:18.761 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:18.761 Initialization complete. Launching workers. 
00:11:18.761 ======================================================== 00:11:18.761 Latency(us) 00:11:18.761 Device Information : IOPS MiB/s Average min max 00:11:18.761 PCIE (0000:00:10.0) NSID 1 from core 1: 4486.80 17.53 3563.56 1184.31 10116.97 00:11:18.761 PCIE (0000:00:11.0) NSID 1 from core 1: 4486.80 17.53 3565.64 1169.82 10403.16 00:11:18.761 PCIE (0000:00:13.0) NSID 1 from core 1: 4486.80 17.53 3566.02 1196.18 10939.09 00:11:18.761 PCIE (0000:00:12.0) NSID 1 from core 1: 4486.80 17.53 3566.48 1212.29 13065.70 00:11:18.761 PCIE (0000:00:12.0) NSID 2 from core 1: 4486.80 17.53 3566.54 1231.11 13306.81 00:11:18.761 PCIE (0000:00:12.0) NSID 3 from core 1: 4486.80 17.53 3566.58 1207.59 11172.78 00:11:18.761 ======================================================== 00:11:18.761 Total : 26920.80 105.16 3565.80 1169.82 13306.81 00:11:18.761 00:11:18.761 Initializing NVMe Controllers 00:11:18.761 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:18.761 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:18.761 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:18.761 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:18.761 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:18.761 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:18.761 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:18.761 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:18.761 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:18.761 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:18.761 Initialization complete. Launching workers. 00:11:18.761 ======================================================== 00:11:18.761 Latency(us) 00:11:18.761 Device Information : IOPS MiB/s Average min max 00:11:18.761 PCIE (0000:00:10.0) NSID 1 from core 0: 4359.06 17.03 3667.90 1064.26 7435.97 00:11:18.761 PCIE (0000:00:11.0) NSID 1 from core 0: 4359.06 17.03 3669.74 1091.42 8521.11 00:11:18.761 PCIE (0000:00:13.0) NSID 1 from core 0: 4359.06 17.03 3669.69 1062.12 8910.05 00:11:18.761 PCIE (0000:00:12.0) NSID 1 from core 0: 4359.06 17.03 3669.69 1081.15 8817.50 00:11:18.761 PCIE (0000:00:12.0) NSID 2 from core 0: 4359.06 17.03 3669.71 1078.78 8598.62 00:11:18.761 PCIE (0000:00:12.0) NSID 3 from core 0: 4359.06 17.03 3669.67 1072.22 7690.89 00:11:18.761 ======================================================== 00:11:18.761 Total : 26154.35 102.17 3669.40 1062.12 8910.05 00:11:18.761 00:11:21.294 Initializing NVMe Controllers 00:11:21.294 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:21.294 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:21.294 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:21.294 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:21.294 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:21.294 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:21.294 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:21.294 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:21.294 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:21.294 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:21.294 Initialization complete. Launching workers. 
00:11:21.294 ======================================================== 00:11:21.294 Latency(us) 00:11:21.294 Device Information : IOPS MiB/s Average min max 00:11:21.294 PCIE (0000:00:10.0) NSID 1 from core 2: 3343.30 13.06 4784.65 1035.07 14232.39 00:11:21.294 PCIE (0000:00:11.0) NSID 1 from core 2: 3343.30 13.06 4785.48 1035.38 13165.52 00:11:21.294 PCIE (0000:00:13.0) NSID 1 from core 2: 3343.30 13.06 4785.16 1126.19 13096.41 00:11:21.294 PCIE (0000:00:12.0) NSID 1 from core 2: 3343.30 13.06 4785.33 1113.83 12802.05 00:11:21.294 PCIE (0000:00:12.0) NSID 2 from core 2: 3343.30 13.06 4785.02 1139.35 13588.14 00:11:21.294 PCIE (0000:00:12.0) NSID 3 from core 2: 3343.30 13.06 4784.95 1116.85 13721.26 00:11:21.294 ======================================================== 00:11:21.294 Total : 20059.77 78.36 4785.10 1035.07 14232.39 00:11:21.294 00:11:21.294 ************************************ 00:11:21.294 END TEST nvme_multi_secondary 00:11:21.294 ************************************ 00:11:21.294 16:05:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65240 00:11:21.294 16:05:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65241 00:11:21.294 00:11:21.294 real 0m10.934s 00:11:21.294 user 0m18.567s 00:11:21.294 sys 0m1.000s 00:11:21.294 16:05:39 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:21.294 16:05:39 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:11:21.294 16:05:39 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:21.294 16:05:39 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:11:21.294 16:05:39 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/64178 ]] 00:11:21.294 16:05:39 nvme -- common/autotest_common.sh@1092 -- # kill 64178 00:11:21.294 16:05:39 nvme -- common/autotest_common.sh@1093 -- # wait 64178 00:11:21.294 [2024-11-04 16:05:39.562996] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65113) is not found. Dropping the request. 00:11:21.294 [2024-11-04 16:05:39.563139] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65113) is not found. Dropping the request. 00:11:21.294 [2024-11-04 16:05:39.563220] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65113) is not found. Dropping the request. 00:11:21.294 [2024-11-04 16:05:39.563273] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65113) is not found. Dropping the request. 00:11:21.294 [2024-11-04 16:05:39.569497] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65113) is not found. Dropping the request. 00:11:21.294 [2024-11-04 16:05:39.569591] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65113) is not found. Dropping the request. 00:11:21.294 [2024-11-04 16:05:39.569631] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65113) is not found. Dropping the request. 00:11:21.294 [2024-11-04 16:05:39.569672] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65113) is not found. Dropping the request. 00:11:21.294 [2024-11-04 16:05:39.575457] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65113) is not found. Dropping the request. 
00:11:21.294 [2024-11-04 16:05:39.575522] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65113) is not found. Dropping the request. 00:11:21.294 [2024-11-04 16:05:39.575549] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65113) is not found. Dropping the request. 00:11:21.294 [2024-11-04 16:05:39.575577] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65113) is not found. Dropping the request. 00:11:21.294 [2024-11-04 16:05:39.579628] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65113) is not found. Dropping the request. 00:11:21.294 [2024-11-04 16:05:39.579697] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65113) is not found. Dropping the request. 00:11:21.294 [2024-11-04 16:05:39.579724] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65113) is not found. Dropping the request. 00:11:21.294 [2024-11-04 16:05:39.579765] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65113) is not found. Dropping the request. 00:11:21.294 [2024-11-04 16:05:39.728906] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:11:21.294 16:05:39 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:11:21.294 16:05:39 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:11:21.294 16:05:39 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:21.294 16:05:39 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:21.294 16:05:39 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:21.294 16:05:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:21.294 ************************************ 00:11:21.294 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:21.294 ************************************ 00:11:21.294 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:21.294 * Looking for test storage... 
00:11:21.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:21.294 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:21.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.295 --rc genhtml_branch_coverage=1 00:11:21.295 --rc genhtml_function_coverage=1 00:11:21.295 --rc genhtml_legend=1 00:11:21.295 --rc geninfo_all_blocks=1 00:11:21.295 --rc geninfo_unexecuted_blocks=1 00:11:21.295 00:11:21.295 ' 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:21.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.295 --rc genhtml_branch_coverage=1 00:11:21.295 --rc genhtml_function_coverage=1 00:11:21.295 --rc genhtml_legend=1 00:11:21.295 --rc geninfo_all_blocks=1 00:11:21.295 --rc geninfo_unexecuted_blocks=1 00:11:21.295 00:11:21.295 ' 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:21.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.295 --rc genhtml_branch_coverage=1 00:11:21.295 --rc genhtml_function_coverage=1 00:11:21.295 --rc genhtml_legend=1 00:11:21.295 --rc geninfo_all_blocks=1 00:11:21.295 --rc geninfo_unexecuted_blocks=1 00:11:21.295 00:11:21.295 ' 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:21.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.295 --rc genhtml_branch_coverage=1 00:11:21.295 --rc genhtml_function_coverage=1 00:11:21.295 --rc genhtml_legend=1 00:11:21.295 --rc geninfo_all_blocks=1 00:11:21.295 --rc geninfo_unexecuted_blocks=1 00:11:21.295 00:11:21.295 ' 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:21.295 
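A note on the lcov version gate traced a few lines above: scripts/common.sh splits both version strings on '.', '-' and ':' and compares them numerically field by field, which is why the "lt 1.15 2" check returns 0 here and the legacy --rc lcov_* options are selected. A compressed sketch of that comparison, keeping the helper names shown in the trace but trimming the sanitization the real script performs:

# Compressed sketch of the version comparison exercised by "lt 1.15 2" above.
cmp_versions() {                        # usage: cmp_versions 1.15 '<' 2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local v d1 d2
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        ((d1 == d2)) && continue
        ((d1 < d2)) && [[ $2 == '<' ]] && return 0
        ((d1 > d2)) && [[ $2 == '>' ]] && return 0
        return 1
    done
    return 1                            # equal versions: strict < and > both fail
}
lt() { cmp_versions "$1" '<' "$2"; }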
16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:21.295 16:05:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:21.554 16:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:11:21.554 16:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:21.554 16:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:11:21.554 16:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:21.554 16:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:21.554 16:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65408 00:11:21.554 16:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:21.554 16:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:21.554 16:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65408 00:11:21.554 16:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 65408 ']' 00:11:21.554 16:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.554 16:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:21.554 16:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
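The get_first_nvme_bdf step traced above finds the controllers to test by asking gen_nvme.sh for a bdev configuration and extracting the PCI addresses with jq, then keeps the first entry as the target. A minimal sketch of that lookup using the paths shown in the trace (the head -n1 shortcut stands in for the helper that picks the first address):

# Sketch of the NVMe BDF discovery used to pick the target controller above.
rootdir=/home/vagrant/spdk_repo/spdk

get_nvme_bdfs() {
    local -a bdfs
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    ((${#bdfs[@]} == 0)) && return 1              # no NVMe controllers found
    printf '%s\n' "${bdfs[@]}"
}

bdf=$(get_nvme_bdfs | head -n1)                   # here: 0000:00:10.0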
00:11:21.554 16:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:21.554 16:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:21.554 [2024-11-04 16:05:40.216716] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:11:21.554 [2024-11-04 16:05:40.216870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65408 ] 00:11:21.813 [2024-11-04 16:05:40.416142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.072 [2024-11-04 16:05:40.544024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.072 [2024-11-04 16:05:40.544115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.072 [2024-11-04 16:05:40.544182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.072 [2024-11-04 16:05:40.544190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.008 16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:23.008 16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:11:23.008 16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:23.008 16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.008 16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:23.008 nvme0n1 00:11:23.008 16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.008 16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:23.008 16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_fkk8r.txt 00:11:23.008 16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:23.008 16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.008 16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:23.008 true 00:11:23.008 16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.008 16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:23.008 16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730736341 00:11:23.008 16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65431 00:11:23.008 16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:23.008 16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:23.008 
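At this point the stuck-admin-command scenario is fully armed: nvme0 is attached over PCIe, a one-shot error injection will hold the next Get Features admin command (opcode 10) for up to 15 seconds with status sct=0/sc=1, and bdev_nvme_send_cmd has been started in the background to issue exactly that command. A condensed sketch of the RPC sequence the trace drives, with arguments copied from the trace (the rpc alias, temp-file variable and exact timing placement are illustrative):

# Condensed sketch of the reset-stuck-admin-command flow exercised above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
$rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$get_feat_cmd" \
    > "$tmp_file" &                     # admin command that will get stuck
get_feat_pid=$!

start_time=$(date +%s)
sleep 2
$rpc bdev_nvme_reset_controller nvme0   # must complete despite the stuck command
wait "$get_feat_pid"
diff_time=$(( $(date +%s) - start_time ))

The completion captured in the temp file is then decoded by base64_decode_bits (base64 -d piped through hexdump, as traced further down) to confirm the injected status really came back (sc=0x1, sct=0x0), and diff_time is compared against test_timeout=5 to prove the reset did not simply wait out the 15-second injection.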
16:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:24.914 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:24.914 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.914 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:24.914 [2024-11-04 16:05:43.551899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:24.914 [2024-11-04 16:05:43.552297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:24.914 [2024-11-04 16:05:43.552335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:24.914 [2024-11-04 16:05:43.552354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.914 [2024-11-04 16:05:43.554733] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:24.914 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65431 00:11:24.914 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.914 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65431 00:11:24.914 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65431 00:11:24.914 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:24.914 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:24.914 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:24.914 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.914 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:24.914 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.914 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:24.914 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_fkk8r.txt 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_fkk8r.txt 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65408 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 65408 ']' 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 65408 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65408 00:11:25.173 killing process with pid 65408 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65408' 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 65408 00:11:25.173 16:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 65408 00:11:27.704 16:05:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:27.704 16:05:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:27.704 00:11:27.704 real 0m6.396s 00:11:27.704 user 0m22.216s 00:11:27.704 sys 0m0.818s 00:11:27.704 16:05:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:11:27.704 ************************************ 00:11:27.704 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:27.704 ************************************ 00:11:27.704 16:05:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:27.704 16:05:46 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:27.704 16:05:46 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:27.704 16:05:46 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:27.704 16:05:46 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:27.704 16:05:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:27.704 ************************************ 00:11:27.704 START TEST nvme_fio 00:11:27.704 ************************************ 00:11:27.704 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:11:27.704 16:05:46 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:27.704 16:05:46 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:27.704 16:05:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:27.704 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:27.704 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:11:27.704 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:27.704 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:27.704 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:27.704 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:11:27.704 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:27.704 16:05:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:27.704 16:05:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:27.704 16:05:46 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:27.704 16:05:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:27.704 16:05:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:27.963 16:05:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:27.963 16:05:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:28.221 16:05:46 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:28.221 16:05:46 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:28.221 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:28.221 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:11:28.221 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:28.221 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:11:28.221 16:05:46 nvme.nvme_fio -- 
common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:28.221 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:11:28.221 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:11:28.221 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:11:28.221 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:28.221 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:11:28.221 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:11:28.480 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:28.480 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:28.480 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:11:28.480 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:28.480 16:05:46 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:28.480 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:28.480 fio-3.35 00:11:28.480 Starting 1 thread 00:11:32.667 00:11:32.667 test: (groupid=0, jobs=1): err= 0: pid=65583: Mon Nov 4 16:05:50 2024 00:11:32.667 read: IOPS=21.9k, BW=85.5MiB/s (89.7MB/s)(171MiB/2001msec) 00:11:32.667 slat (nsec): min=4010, max=62664, avg=4530.87, stdev=845.83 00:11:32.667 clat (usec): min=189, max=10800, avg=2917.67, stdev=232.05 00:11:32.667 lat (usec): min=193, max=10862, avg=2922.20, stdev=232.43 00:11:32.667 clat percentiles (usec): 00:11:32.667 | 1.00th=[ 2704], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2835], 00:11:32.667 | 30.00th=[ 2868], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:11:32.667 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 2999], 95.00th=[ 3064], 00:11:32.667 | 99.00th=[ 3326], 99.50th=[ 3982], 99.90th=[ 5604], 99.95th=[ 8455], 00:11:32.667 | 99.99th=[10552] 00:11:32.667 bw ( KiB/s): min=84688, max=88424, per=99.24%, avg=86896.00, stdev=1958.63, samples=3 00:11:32.667 iops : min=21172, max=22106, avg=21724.00, stdev=489.66, samples=3 00:11:32.667 write: IOPS=21.7k, BW=84.9MiB/s (89.1MB/s)(170MiB/2001msec); 0 zone resets 00:11:32.667 slat (nsec): min=4120, max=63106, avg=4829.60, stdev=944.53 00:11:32.667 clat (usec): min=227, max=10669, avg=2922.62, stdev=242.61 00:11:32.667 lat (usec): min=231, max=10690, avg=2927.45, stdev=242.96 00:11:32.667 clat percentiles (usec): 00:11:32.667 | 1.00th=[ 2704], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2835], 00:11:32.667 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2900], 60.00th=[ 2933], 00:11:32.667 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 2999], 95.00th=[ 3064], 00:11:32.667 | 99.00th=[ 3359], 99.50th=[ 4047], 99.90th=[ 6587], 99.95th=[ 8455], 00:11:32.667 | 99.99th=[10159] 00:11:32.667 bw ( KiB/s): min=84552, max=88536, per=100.00%, avg=87096.00, stdev=2209.56, samples=3 00:11:32.667 iops : min=21138, max=22134, avg=21774.00, stdev=552.39, samples=3 00:11:32.667 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:32.667 lat (msec) : 2=0.05%, 4=99.40%, 10=0.49%, 20=0.02% 00:11:32.667 cpu : usr=99.35%, sys=0.15%, ctx=4, 
majf=0, minf=608 00:11:32.667 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:32.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:32.667 issued rwts: total=43803,43504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:32.667 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:32.667 00:11:32.667 Run status group 0 (all jobs): 00:11:32.667 READ: bw=85.5MiB/s (89.7MB/s), 85.5MiB/s-85.5MiB/s (89.7MB/s-89.7MB/s), io=171MiB (179MB), run=2001-2001msec 00:11:32.667 WRITE: bw=84.9MiB/s (89.1MB/s), 84.9MiB/s-84.9MiB/s (89.1MB/s-89.1MB/s), io=170MiB (178MB), run=2001-2001msec 00:11:32.667 ----------------------------------------------------- 00:11:32.667 Suppressions used: 00:11:32.667 count bytes template 00:11:32.667 1 32 /usr/src/fio/parse.c 00:11:32.667 1 8 libtcmalloc_minimal.so 00:11:32.667 ----------------------------------------------------- 00:11:32.667 00:11:32.667 16:05:50 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:32.667 16:05:50 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:32.667 16:05:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:32.667 16:05:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:32.667 16:05:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:32.667 16:05:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:32.926 16:05:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:32.926 16:05:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:32.926 16:05:51 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:32.926 16:05:51 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:11:32.926 16:05:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:32.926 16:05:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:11:32.926 16:05:51 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:32.926 16:05:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:11:32.926 16:05:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:11:32.926 16:05:51 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:11:32.926 16:05:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:11:32.926 16:05:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:32.926 16:05:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:11:32.926 16:05:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:32.926 16:05:51 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:32.926 16:05:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:11:32.926 16:05:51 nvme.nvme_fio -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:32.926 16:05:51 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:33.184 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:33.184 fio-3.35 00:11:33.184 Starting 1 thread 00:11:37.374 00:11:37.374 test: (groupid=0, jobs=1): err= 0: pid=65649: Mon Nov 4 16:05:55 2024 00:11:37.374 read: IOPS=21.4k, BW=83.6MiB/s (87.7MB/s)(167MiB/2001msec) 00:11:37.374 slat (nsec): min=3789, max=56228, avg=4542.68, stdev=1084.12 00:11:37.374 clat (usec): min=209, max=10601, avg=2984.09, stdev=417.07 00:11:37.374 lat (usec): min=214, max=10646, avg=2988.64, stdev=417.54 00:11:37.374 clat percentiles (usec): 00:11:37.374 | 1.00th=[ 2409], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2835], 00:11:37.374 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:11:37.374 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3163], 95.00th=[ 3261], 00:11:37.374 | 99.00th=[ 4817], 99.50th=[ 5473], 99.90th=[ 8455], 99.95th=[ 8455], 00:11:37.375 | 99.99th=[10290] 00:11:37.375 bw ( KiB/s): min=80912, max=89904, per=99.49%, avg=85181.33, stdev=4513.11, samples=3 00:11:37.375 iops : min=20228, max=22476, avg=21295.33, stdev=1128.28, samples=3 00:11:37.375 write: IOPS=21.2k, BW=83.0MiB/s (87.0MB/s)(166MiB/2001msec); 0 zone resets 00:11:37.375 slat (nsec): min=3876, max=47148, avg=4850.79, stdev=1124.37 00:11:37.375 clat (usec): min=242, max=10485, avg=2993.22, stdev=417.31 00:11:37.375 lat (usec): min=246, max=10503, avg=2998.07, stdev=417.80 00:11:37.375 clat percentiles (usec): 00:11:37.375 | 1.00th=[ 2474], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2835], 00:11:37.375 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:11:37.375 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3163], 95.00th=[ 3261], 00:11:37.375 | 99.00th=[ 4817], 99.50th=[ 5473], 99.90th=[ 8356], 99.95th=[ 8717], 00:11:37.375 | 99.99th=[10159] 00:11:37.375 bw ( KiB/s): min=80792, max=90320, per=100.00%, avg=85368.00, stdev=4775.12, samples=3 00:11:37.375 iops : min=20198, max=22580, avg=21342.00, stdev=1193.78, samples=3 00:11:37.375 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:11:37.375 lat (msec) : 2=0.43%, 4=97.82%, 10=1.70%, 20=0.01% 00:11:37.375 cpu : usr=99.45%, sys=0.05%, ctx=2, majf=0, minf=607 00:11:37.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:37.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:37.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:37.375 issued rwts: total=42832,42501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:37.375 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:37.375 00:11:37.375 Run status group 0 (all jobs): 00:11:37.375 READ: bw=83.6MiB/s (87.7MB/s), 83.6MiB/s-83.6MiB/s (87.7MB/s-87.7MB/s), io=167MiB (175MB), run=2001-2001msec 00:11:37.375 WRITE: bw=83.0MiB/s (87.0MB/s), 83.0MiB/s-83.0MiB/s (87.0MB/s-87.0MB/s), io=166MiB (174MB), run=2001-2001msec 00:11:37.375 ----------------------------------------------------- 00:11:37.375 Suppressions used: 00:11:37.375 count bytes template 00:11:37.375 1 32 /usr/src/fio/parse.c 00:11:37.375 1 8 libtcmalloc_minimal.so 00:11:37.375 ----------------------------------------------------- 00:11:37.375 
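Each fio run in this test is launched the same way: the script inspects the SPDK fio plugin with ldd to find the ASAN runtime it was linked against, then preloads that runtime together with the plugin so fio can drive ioengine=spdk directly against a PCIe traddr. A minimal sketch of the launch that just completed for 0000:00:11.0, with paths taken from the trace (the variable handling is simplified):

# Sketch of the fio_nvme launch pattern repeated once per controller above.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
fio_cfg=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

# Locate the sanitizer runtime the plugin needs (empty if not an ASAN build).
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$fio_cfg" \
    '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096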
00:11:37.375 16:05:55 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:37.375 16:05:55 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:37.375 16:05:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:37.375 16:05:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:37.375 16:05:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:37.375 16:05:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:37.633 16:05:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:37.633 16:05:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:37.633 16:05:56 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:37.633 16:05:56 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:11:37.633 16:05:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:37.633 16:05:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:11:37.633 16:05:56 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:37.633 16:05:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:11:37.633 16:05:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:11:37.633 16:05:56 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:11:37.633 16:05:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:37.633 16:05:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:11:37.633 16:05:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:11:37.633 16:05:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:37.633 16:05:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:37.633 16:05:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:11:37.633 16:05:56 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:37.633 16:05:56 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:37.633 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:37.633 fio-3.35 00:11:37.633 Starting 1 thread 00:11:41.819 00:11:41.819 test: (groupid=0, jobs=1): err= 0: pid=65710: Mon Nov 4 16:05:59 2024 00:11:41.819 read: IOPS=19.3k, BW=75.2MiB/s (78.9MB/s)(151MiB/2001msec) 00:11:41.819 slat (nsec): min=4213, max=79846, avg=5362.09, stdev=1773.98 00:11:41.819 clat (usec): min=251, max=12548, avg=3305.75, stdev=325.57 00:11:41.819 lat (usec): min=256, max=12628, avg=3311.12, stdev=326.00 00:11:41.819 clat percentiles (usec): 00:11:41.819 | 1.00th=[ 2966], 5.00th=[ 3097], 10.00th=[ 3130], 20.00th=[ 3195], 00:11:41.819 | 
30.00th=[ 3228], 40.00th=[ 3261], 50.00th=[ 3294], 60.00th=[ 3326], 00:11:41.819 | 70.00th=[ 3359], 80.00th=[ 3392], 90.00th=[ 3458], 95.00th=[ 3523], 00:11:41.819 | 99.00th=[ 4178], 99.50th=[ 4883], 99.90th=[ 7373], 99.95th=[10028], 00:11:41.819 | 99.99th=[12256] 00:11:41.819 bw ( KiB/s): min=75120, max=77848, per=99.55%, avg=76690.67, stdev=1410.19, samples=3 00:11:41.819 iops : min=18780, max=19462, avg=19172.67, stdev=352.55, samples=3 00:11:41.819 write: IOPS=19.2k, BW=75.1MiB/s (78.8MB/s)(150MiB/2001msec); 0 zone resets 00:11:41.819 slat (nsec): min=4426, max=51987, avg=5768.96, stdev=1808.95 00:11:41.819 clat (usec): min=242, max=12350, avg=3314.89, stdev=332.98 00:11:41.819 lat (usec): min=248, max=12374, avg=3320.66, stdev=333.36 00:11:41.819 clat percentiles (usec): 00:11:41.819 | 1.00th=[ 2966], 5.00th=[ 3097], 10.00th=[ 3130], 20.00th=[ 3195], 00:11:41.819 | 30.00th=[ 3228], 40.00th=[ 3261], 50.00th=[ 3294], 60.00th=[ 3326], 00:11:41.819 | 70.00th=[ 3359], 80.00th=[ 3392], 90.00th=[ 3458], 95.00th=[ 3523], 00:11:41.819 | 99.00th=[ 4178], 99.50th=[ 4948], 99.90th=[ 8094], 99.95th=[10290], 00:11:41.819 | 99.99th=[11994] 00:11:41.819 bw ( KiB/s): min=75016, max=78088, per=99.86%, avg=76816.00, stdev=1602.62, samples=3 00:11:41.819 iops : min=18754, max=19522, avg=19204.00, stdev=400.65, samples=3 00:11:41.819 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:41.820 lat (msec) : 2=0.15%, 4=98.54%, 10=1.21%, 20=0.06% 00:11:41.820 cpu : usr=99.20%, sys=0.20%, ctx=2, majf=0, minf=607 00:11:41.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:41.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:41.820 issued rwts: total=38536,38481,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:41.820 00:11:41.820 Run status group 0 (all jobs): 00:11:41.820 READ: bw=75.2MiB/s (78.9MB/s), 75.2MiB/s-75.2MiB/s (78.9MB/s-78.9MB/s), io=151MiB (158MB), run=2001-2001msec 00:11:41.820 WRITE: bw=75.1MiB/s (78.8MB/s), 75.1MiB/s-75.1MiB/s (78.8MB/s-78.8MB/s), io=150MiB (158MB), run=2001-2001msec 00:11:41.820 ----------------------------------------------------- 00:11:41.820 Suppressions used: 00:11:41.820 count bytes template 00:11:41.820 1 32 /usr/src/fio/parse.c 00:11:41.820 1 8 libtcmalloc_minimal.so 00:11:41.820 ----------------------------------------------------- 00:11:41.820 00:11:41.820 16:06:00 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:41.820 16:06:00 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:41.820 16:06:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:41.820 16:06:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:41.820 16:06:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:41.820 16:06:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:42.077 16:06:00 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:42.077 16:06:00 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:42.077 16:06:00 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:42.077 16:06:00 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:11:42.077 16:06:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:42.077 16:06:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:11:42.077 16:06:00 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:42.077 16:06:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:11:42.077 16:06:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:11:42.078 16:06:00 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:11:42.078 16:06:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:42.078 16:06:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:11:42.078 16:06:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:11:42.078 16:06:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:42.078 16:06:00 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:42.078 16:06:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:11:42.078 16:06:00 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:42.078 16:06:00 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:42.336 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:42.336 fio-3.35 00:11:42.336 Starting 1 thread 00:11:46.520 00:11:46.520 test: (groupid=0, jobs=1): err= 0: pid=65772: Mon Nov 4 16:06:04 2024 00:11:46.520 read: IOPS=18.0k, BW=70.1MiB/s (73.5MB/s)(141MiB/2009msec) 00:11:46.520 slat (usec): min=3, max=158, avg= 4.58, stdev= 1.62 00:11:46.520 clat (usec): min=708, max=10255, avg=2755.35, stdev=605.77 00:11:46.520 lat (usec): min=712, max=10268, avg=2759.94, stdev=606.13 00:11:46.520 clat percentiles (usec): 00:11:46.520 | 1.00th=[ 1467], 5.00th=[ 1811], 10.00th=[ 2057], 20.00th=[ 2474], 00:11:46.520 | 30.00th=[ 2704], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:11:46.520 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 3032], 95.00th=[ 3458], 00:11:46.520 | 99.00th=[ 5014], 99.50th=[ 5669], 99.90th=[ 9110], 99.95th=[ 9372], 00:11:46.520 | 99.99th=[10159] 00:11:46.520 bw ( KiB/s): min=51248, max=90432, per=100.00%, avg=72072.00, stdev=19484.29, samples=4 00:11:46.520 iops : min=12812, max=22608, avg=18018.00, stdev=4871.07, samples=4 00:11:46.520 write: IOPS=18.0k, BW=70.2MiB/s (73.6MB/s)(141MiB/2009msec); 0 zone resets 00:11:46.520 slat (usec): min=3, max=102, avg= 4.87, stdev= 1.51 00:11:46.520 clat (usec): min=1172, max=22243, avg=4337.09, stdev=3641.66 00:11:46.520 lat (usec): min=1176, max=22247, avg=4341.95, stdev=3641.71 00:11:46.520 clat percentiles (usec): 00:11:46.520 | 1.00th=[ 1598], 5.00th=[ 2057], 10.00th=[ 2474], 20.00th=[ 2737], 00:11:46.520 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:11:46.520 | 70.00th=[ 2933], 80.00th=[ 3490], 
90.00th=[11207], 95.00th=[12518], 00:11:46.520 | 99.00th=[17433], 99.50th=[18482], 99.90th=[20055], 99.95th=[21103], 00:11:46.520 | 99.99th=[22152] 00:11:46.520 bw ( KiB/s): min=51992, max=90248, per=100.00%, avg=72028.00, stdev=19195.61, samples=4 00:11:46.520 iops : min=12998, max=22562, avg=18007.00, stdev=4798.90, samples=4 00:11:46.520 lat (usec) : 750=0.01% 00:11:46.520 lat (msec) : 2=6.53%, 4=82.87%, 10=3.36%, 20=7.18%, 50=0.06% 00:11:46.520 cpu : usr=99.15%, sys=0.20%, ctx=21, majf=0, minf=606 00:11:46.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:46.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:46.520 issued rwts: total=36064,36113,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:46.520 00:11:46.520 Run status group 0 (all jobs): 00:11:46.520 READ: bw=70.1MiB/s (73.5MB/s), 70.1MiB/s-70.1MiB/s (73.5MB/s-73.5MB/s), io=141MiB (148MB), run=2009-2009msec 00:11:46.520 WRITE: bw=70.2MiB/s (73.6MB/s), 70.2MiB/s-70.2MiB/s (73.6MB/s-73.6MB/s), io=141MiB (148MB), run=2009-2009msec 00:11:46.520 ----------------------------------------------------- 00:11:46.520 Suppressions used: 00:11:46.520 count bytes template 00:11:46.520 1 32 /usr/src/fio/parse.c 00:11:46.520 1 8 libtcmalloc_minimal.so 00:11:46.520 ----------------------------------------------------- 00:11:46.520 00:11:46.520 16:06:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:46.520 16:06:04 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:46.520 00:11:46.520 real 0m18.478s 00:11:46.520 user 0m14.666s 00:11:46.520 sys 0m2.855s 00:11:46.520 16:06:04 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:46.520 ************************************ 00:11:46.520 END TEST nvme_fio 00:11:46.520 16:06:04 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:46.520 ************************************ 00:11:46.520 00:11:46.520 real 1m33.734s 00:11:46.520 user 3m43.184s 00:11:46.520 sys 0m22.129s 00:11:46.520 16:06:04 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:46.520 ************************************ 00:11:46.520 16:06:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:46.520 END TEST nvme 00:11:46.520 ************************************ 00:11:46.520 16:06:04 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:11:46.520 16:06:04 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:46.520 16:06:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:46.520 16:06:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:46.520 16:06:04 -- common/autotest_common.sh@10 -- # set +x 00:11:46.520 ************************************ 00:11:46.520 START TEST nvme_scc 00:11:46.520 ************************************ 00:11:46.520 16:06:04 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:46.520 * Looking for test storage... 
00:11:46.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:46.520 16:06:04 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:46.521 16:06:04 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:46.521 16:06:04 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:46.521 16:06:05 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@345 -- # : 1 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@368 -- # return 0 00:11:46.521 16:06:05 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.521 16:06:05 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:46.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.521 --rc genhtml_branch_coverage=1 00:11:46.521 --rc genhtml_function_coverage=1 00:11:46.521 --rc genhtml_legend=1 00:11:46.521 --rc geninfo_all_blocks=1 00:11:46.521 --rc geninfo_unexecuted_blocks=1 00:11:46.521 00:11:46.521 ' 00:11:46.521 16:06:05 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:46.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.521 --rc genhtml_branch_coverage=1 00:11:46.521 --rc genhtml_function_coverage=1 00:11:46.521 --rc genhtml_legend=1 00:11:46.521 --rc geninfo_all_blocks=1 00:11:46.521 --rc geninfo_unexecuted_blocks=1 00:11:46.521 00:11:46.521 ' 00:11:46.521 16:06:05 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:11:46.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.521 --rc genhtml_branch_coverage=1 00:11:46.521 --rc genhtml_function_coverage=1 00:11:46.521 --rc genhtml_legend=1 00:11:46.521 --rc geninfo_all_blocks=1 00:11:46.521 --rc geninfo_unexecuted_blocks=1 00:11:46.521 00:11:46.521 ' 00:11:46.521 16:06:05 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:46.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.521 --rc genhtml_branch_coverage=1 00:11:46.521 --rc genhtml_function_coverage=1 00:11:46.521 --rc genhtml_legend=1 00:11:46.521 --rc geninfo_all_blocks=1 00:11:46.521 --rc geninfo_unexecuted_blocks=1 00:11:46.521 00:11:46.521 ' 00:11:46.521 16:06:05 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:46.521 16:06:05 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:46.521 16:06:05 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:46.521 16:06:05 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:46.521 16:06:05 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.521 16:06:05 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.521 16:06:05 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.521 16:06:05 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.521 16:06:05 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.521 16:06:05 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:46.521 16:06:05 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
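[Editor's note] The lcov version gate traced above (lt 1.15 2 → cmp_versions 1.15 '<' 2) splits both version strings on '.', '-' or ':' and compares them field by field as integers, falling back to the legacy --rc lcov_*_coverage options when lcov is older than 2. A minimal re-implementation of that idea, not the exact scripts/common.sh code, and assuming purely numeric fields:

lt() {   # return 0 if $1 < $2 for dotted versions, e.g. lt 1.15 2
    local IFS=.-: i a b
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
        a=${ver1[i]:-0}; b=${ver2[i]:-0}         # missing fields compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                                     # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_*_coverage options"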
00:11:46.521 16:06:05 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:46.521 16:06:05 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:46.521 16:06:05 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:46.521 16:06:05 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:46.521 16:06:05 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:46.521 16:06:05 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:46.521 16:06:05 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:46.521 16:06:05 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:46.521 16:06:05 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:46.521 16:06:05 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:46.521 16:06:05 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:46.521 16:06:05 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:46.521 16:06:05 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:46.521 16:06:05 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:46.780 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:47.038 Waiting for block devices as requested 00:11:47.297 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.297 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.297 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.556 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:52.847 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:52.847 16:06:11 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:52.847 16:06:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:52.847 16:06:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:52.847 16:06:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:52.847 16:06:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
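[Editor's note] The trace that follows is scan_nvme_ctrls walking every controller under /sys/class/nvme, recording its PCI address, and folding the "field : value" lines of nvme id-ctrl / id-ns into bash associative arrays (nvme0, nvme0n1, ...). A rough sketch of that pattern, assuming PCIe-attached controllers and a plain `nvme` binary on PATH rather than the /usr/local/src/nvme-cli build used in this run:

declare -A id bdfs                               # id["nvme0.sn"]=..., bdfs["nvme0"]=...
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    dev=${ctrl##*/}                              # nvme0, nvme1, ...
    bdfs[$dev]=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        id[$dev.${reg//[[:space:]]/}]=${val# }   # e.g. id[nvme0.mn]='QEMU NVMe Ctrl'
    done < <(nvme id-ctrl "/dev/$dev")
done
echo "controllers: ${!bdfs[*]} @ ${bdfs[*]}"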
00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.847 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
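[Editor's note] Several of the identify fields captured above are capability bitmaps rather than scalars (oacs=0x12a, frmw=0x3, lpa=0x7), and test scripts typically gate on single bits of them. An illustrative check only, assuming OACS bit 3 is the Namespace Management capability as defined in the NVMe base spec; this is not code from functions.sh:

oacs=0x12a                                       # from the id-ctrl dump above
if (( oacs & (1 << 3) )); then
    echo "controller advertises Namespace Management / Attachment"
else
    echo "no Namespace Management support"
fi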
00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.848 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:52.849 16:06:11 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.849 16:06:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.849 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:52.850 16:06:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.850 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:52.851 16:06:11 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:52.851 16:06:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:52.851 16:06:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:52.851 16:06:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:52.852 16:06:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:52.852 16:06:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:52.852 16:06:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:52.852 
16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:52.852 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
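Note on the lbaf0..lbaf7 strings captured above: per the NVMe spec, lbads is the log2 of the data block size and ms is the per-block metadata size, so "lbads:9" means 512-byte blocks, "lbads:12" means 4096-byte blocks, and the entry tagged "(in use)" (lbaf4 for nvme0n1 above) is the format currently selected via flbas. A small illustrative snippet for turning one of these cached strings back into a byte count (variable names are mine, not from functions.sh):

    # given e.g. the value cached for nvme0n1[lbaf4] above
    lbaf='ms:0 lbads:12 rp:0 (in use)'
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "$lbaf")
    echo "data block size: $((1 << lbads)) bytes"   # -> 4096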
00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:52.853 16:06:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:52.853 16:06:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:52.853 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:52.854 16:06:11 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
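The eval/read pairs that dominate this part of the log are nvme/functions.sh's nvme_get helper splitting the "field : value" output of /usr/local/src/nvme-cli/nvme id-ctrl (and id-ns just below) on ':' and caching every field in a bash associative array named after the device (nvme1, nvme1n1, ...). A minimal standalone sketch of that pattern, simplified from what the trace shows (it omits the shift/local -gA bookkeeping and multi-word key handling the real helper does):

    #!/usr/bin/env bash
    # Cache `nvme id-ctrl` fields in an associative array, one entry per "key : value" line.
    declare -A idctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # keys are padded with spaces in nvme-cli output
        val=${val# }                    # drop the single leading space after ':'
        [[ -n $reg && -n $val ]] && idctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme1)   # needs nvme-cli and root, as in this job
    echo "sn=${idctrl[sn]} mn=${idctrl[mn]} mdts=${idctrl[mdts]}"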
00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.854 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:52.855 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:52.856 
16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:52.856 16:06:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:52.856 16:06:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:52.856 16:06:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:52.856 16:06:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:52.856 16:06:11 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:52.856 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:52.857 16:06:11 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:52.857 16:06:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:52.857 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:52.858 16:06:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:52.858 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:52.859 
16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.859 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:52.860 16:06:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:52.860 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:52.861 16:06:11 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:52.861 16:06:11 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.861 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 
16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 
16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.862 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:52.863 16:06:11 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:52.863 
16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:52.863 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:52.864 16:06:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:52.864 16:06:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:52.864 16:06:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:52.864 16:06:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:52.864 16:06:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
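(Editor's note) At this point the trace has finished the namespaces of nvme2 and moved on to the fourth controller: functions.sh walks /sys/class/nvme/nvme*, resolves each controller's PCI address, filters it through pci_can_use, and then feeds `nvme id-ctrl` through the same parser, so nvme3 (the QEMU controller at 0000:00:13.0) gets its own associative array. A hedged sketch of that enumeration loop for PCIe controllers; the allow-list filtering of the real pci_can_use is omitted:

```bash
#!/usr/bin/env bash
# Sketch of the controller-enumeration loop visible in the trace
# (functions.sh@47-52): walk sysfs, record the PCI address, identify each ctrl.
declare -A ctrls bdfs

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0 (PCIe only)
    # The real script additionally filters through pci_can_use() / PCI_ALLOWED here.
    ctrl_dev=${ctrl##*/}                               # e.g. nvme3
    # nvme_get_sketch is the id-ctrl/id-ns parser sketched earlier:
    # nvme_get_sketch "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"
    ctrls[$ctrl_dev]=$ctrl_dev
    bdfs[$ctrl_dev]=$bdf
done
```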
00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
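(Editor's note) One identify field captured just above is worth decoding: mdts=7. MDTS is reported as a power of two in units of the controller's minimum memory page size (CAP.MPSMIN), so with the usual 4 KiB minimum page size this controller accepts transfers of up to 2^7 × 4 KiB = 512 KiB per command. A one-line arithmetic sketch, assuming the 4 KiB page size rather than reading it from the CAP register:

```bash
mdts=7 mpsmin_bytes=4096                   # MPSMIN assumed here, not read from CAP
echo $(( (1 << mdts) * mpsmin_bytes ))     # -> 524288 bytes (512 KiB)
```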
00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:52.864 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 
16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:52.865 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
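(Editor's note) The value that matters most for this test run is oncs=0x15d, recorded a little further up for nvme3 (and identically for the other controllers, as the detection loop below shows). ONCS is the Optional NVM Command Support bitmap; bit 8 indicates the Copy (simple copy) command, and ctrl_has_scc in functions.sh reads the stored register back through a Bash nameref and tests exactly that bit. A small sketch of that check, assuming the per-controller arrays built above:

```bash
# Sketch of the ctrl_has_scc / get_nvme_ctrl_feature pattern from the trace:
# read a stored register back via a nameref and test ONCS bit 8 (Copy command).
ctrl_has_scc_sketch() {
    local ctrl=$1
    local -n _ctrl=$ctrl              # nameref onto e.g. the nvme3 assoc array
    local oncs=${_ctrl[oncs]:-0}
    (( oncs & 1 << 8 ))               # true when the simple copy command is supported
}

# ctrl_has_scc_sketch nvme3 && echo "nvme3 supports simple copy"   # oncs=0x15d -> bit 8 set
```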
00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:52.866 16:06:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:52.866 16:06:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:52.866 16:06:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:52.867 
16:06:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:11:52.867 16:06:11 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:11:53.126 16:06:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:53.126 16:06:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:11:53.126 16:06:11 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:53.694 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:54.629 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:54.629 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:54.629 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:54.629 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:11:54.629 16:06:13 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:54.629 16:06:13 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:54.629 16:06:13 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:54.629 16:06:13 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:54.629 ************************************ 00:11:54.629 START TEST nvme_simple_copy 00:11:54.629 ************************************ 00:11:54.629 16:06:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:54.888 Initializing NVMe Controllers 00:11:54.888 Attaching to 0000:00:10.0 00:11:54.888 Controller supports SCC. Attached to 0000:00:10.0 00:11:54.888 Namespace ID: 1 size: 6GB 00:11:54.888 Initialization complete. 00:11:54.888 00:11:54.888 Controller QEMU NVMe Ctrl (12340 ) 00:11:54.888 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:54.888 Namespace Block Size:4096 00:11:54.888 Writing LBAs 0 to 63 with Random Data 00:11:54.888 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:54.888 LBAs matching Written Data: 64 00:11:54.888 00:11:54.888 real 0m0.350s 00:11:54.888 user 0m0.148s 00:11:54.888 sys 0m0.100s 00:11:54.888 16:06:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:54.888 16:06:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:11:54.888 ************************************ 00:11:54.888 END TEST nvme_simple_copy 00:11:54.888 ************************************ 00:11:55.146 00:11:55.146 real 0m8.780s 00:11:55.146 user 0m1.522s 00:11:55.146 sys 0m2.264s 00:11:55.146 16:06:13 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:55.146 ************************************ 00:11:55.146 END TEST nvme_scc 00:11:55.146 16:06:13 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:55.146 ************************************ 00:11:55.146 16:06:13 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:11:55.146 16:06:13 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:11:55.146 16:06:13 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:11:55.146 16:06:13 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:11:55.146 16:06:13 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:55.146 16:06:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:55.146 16:06:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:55.146 16:06:13 -- common/autotest_common.sh@10 -- # set +x 00:11:55.146 ************************************ 00:11:55.146 START TEST nvme_fdp 00:11:55.146 ************************************ 00:11:55.146 16:06:13 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh 00:11:55.146 * Looking for test storage... 
00:11:55.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:55.146 16:06:13 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:55.146 16:06:13 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:55.146 16:06:13 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:11:55.406 16:06:13 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.406 16:06:13 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:11:55.406 16:06:13 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.406 16:06:13 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:55.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.406 --rc genhtml_branch_coverage=1 00:11:55.406 --rc genhtml_function_coverage=1 00:11:55.406 --rc genhtml_legend=1 00:11:55.406 --rc geninfo_all_blocks=1 00:11:55.406 --rc geninfo_unexecuted_blocks=1 00:11:55.406 00:11:55.406 ' 00:11:55.406 16:06:13 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:55.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.406 --rc genhtml_branch_coverage=1 00:11:55.406 --rc genhtml_function_coverage=1 00:11:55.406 --rc genhtml_legend=1 00:11:55.406 --rc geninfo_all_blocks=1 00:11:55.406 --rc geninfo_unexecuted_blocks=1 00:11:55.406 00:11:55.406 ' 00:11:55.406 16:06:13 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:11:55.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.406 --rc genhtml_branch_coverage=1 00:11:55.406 --rc genhtml_function_coverage=1 00:11:55.406 --rc genhtml_legend=1 00:11:55.406 --rc geninfo_all_blocks=1 00:11:55.406 --rc geninfo_unexecuted_blocks=1 00:11:55.406 00:11:55.406 ' 00:11:55.406 16:06:13 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:55.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.407 --rc genhtml_branch_coverage=1 00:11:55.407 --rc genhtml_function_coverage=1 00:11:55.407 --rc genhtml_legend=1 00:11:55.407 --rc geninfo_all_blocks=1 00:11:55.407 --rc geninfo_unexecuted_blocks=1 00:11:55.407 00:11:55.407 ' 00:11:55.407 16:06:13 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:55.407 16:06:13 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:55.407 16:06:13 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:55.407 16:06:13 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:55.407 16:06:13 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:55.407 16:06:13 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.407 16:06:13 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.407 16:06:13 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.407 16:06:13 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.407 16:06:13 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.407 16:06:13 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.407 16:06:13 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.407 16:06:13 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:55.407 16:06:13 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
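[annotation] The trace that follows repeats one pattern for every controller: nvme/functions.sh pipes `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvmeX` through `read -r reg val` with `IFS=:` and evals each non-empty field into a global associative array (nvme0[vid], nvme0[oncs], ...), and feature selection is then a bit test on those values. A minimal standalone sketch of that pattern follows; the helper name is made up for illustration and is not the real SPDK function, which lives in test/common/nvme/functions.sh.

    #!/usr/bin/env bash
    # Sketch only: illustrates the IFS=: / read -r / eval loop seen in the trace.
    # "id_ctrl_to_array" is a hypothetical name, not the SPDK helper.
    id_ctrl_to_array() {
        local ref=$1 dev=$2 reg val
        declare -gA "$ref"                          # e.g. creates global assoc array nvme0
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}                # "oncs      " -> "oncs"
            val=${val#"${val%%[![:space:]]*}"}      # trim leading spaces from the value
            [[ -n $reg ]] || continue               # skip blank/header lines
            eval "${ref}[${reg}]=\"${val}\""        # mirrors: eval 'nvme0[oncs]="0x15d"'
        done < <(nvme id-ctrl "$dev")
    }

    # Usage: id_ctrl_to_array nvme0 /dev/nvme0
    # The controller selection seen earlier then reduces to a bit test on ONCS
    # (bit 8 = Simple Copy support; 0x15d in this run has that bit set):
    #   oncs=${nvme0[oncs]}
    #   (( oncs & 1 << 8 )) && echo "controller supports Simple Copy"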
00:11:55.407 16:06:13 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:55.407 16:06:13 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:55.407 16:06:13 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:55.407 16:06:13 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:55.407 16:06:13 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:55.407 16:06:13 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:55.407 16:06:13 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:55.407 16:06:13 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:55.407 16:06:13 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:55.407 16:06:13 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:55.407 16:06:13 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:55.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:55.975 Waiting for block devices as requested 00:11:56.233 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:56.233 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:56.233 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:56.491 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:01.774 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:01.774 16:06:20 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:01.774 16:06:20 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:01.774 16:06:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:01.774 16:06:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:01.774 16:06:20 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:01.774 16:06:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:01.774 16:06:20 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.774 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:01.775 16:06:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:01.775 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:01.776 16:06:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:01.776 16:06:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:01.776 
16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.776 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:01.777 16:06:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:01.777 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:01.778 16:06:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:01.778 16:06:20 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:01.778 16:06:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:01.778 16:06:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:01.778 16:06:20 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # 
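The repeated "IFS=:", "read -r reg val" and "eval" steps above are nvme/functions.sh's nvme_get helper walking the output of /usr/local/src/nvme-cli/nvme id-ctrl (and, for namespaces, id-ns) one "field : value" line at a time and storing every non-empty value into a bash associative array named after the device (nvme0, nvme0n1, nvme1, ...). The following is a minimal sketch of that pattern, reconstructed from the trace rather than taken from the SPDK source: the here-doc is a made-up stand-in for the nvme-cli output, and this simplified nvme_get takes only the array name and reads stdin, whereas the real helper (functions.sh@16-23) also runs the id-ctrl/id-ns command itself.

#!/usr/bin/env bash
# Sketch only: parse "field : value" lines into an associative array,
# mirroring the read/eval loop visible in the xtrace above.
nvme_get() {
    local -n ref=$1                           # target associative array, e.g. nvme1
    local reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # drop the padding around the field name
        val=${val#"${val%%[![:space:]]*}"}    # drop leading whitespace from the value
        [[ -n $reg && -n $val ]] && ref[$reg]=$val
    done
}

declare -A nvme1=()
nvme_get nvme1 <<'EOF'
vid       : 0x1b36
ssvid     : 0x1af4
sn        : 12340
mn        : QEMU NVMe Ctrl
mdts      : 7
EOF

echo "vid=${nvme1[vid]} sn=${nvme1[sn]} mdts=${nvme1[mdts]}"

The trace's own version declares the array with "local -gA" after a shift (functions.sh@18-20) and assigns through eval so it can build the array name dynamically; the nameref above is just a shorter way to express the same bookkeeping.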
IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.778 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.779 
16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:01.779 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 
16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:01.780 16:06:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:01.780 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.781 16:06:20 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:01.781 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:01.782 16:06:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.782 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:01.783 16:06:20 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:01.783 16:06:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:01.783 16:06:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:01.783 16:06:20 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:01.783 
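[Sketch] The trace above is nvme_get() from nvme/functions.sh filling one bash associative array per device: it runs /usr/local/src/nvme-cli/nvme id-ctrl (or id-ns), reads each "field : value" line with IFS=: and read -r reg val, and evals the pair into entries such as nvme1n1[mssrl]=128. A minimal standalone sketch of that parsing pattern follows; the function and array names (parse_id_output, idinfo) are invented for illustration, the nvme binary is assumed to be on PATH, and the real helper uses a named reference plus eval rather than a fixed array.

#!/usr/bin/env bash
# Rough sketch (not the harness code): parse `nvme id-ctrl <dev>` output into
# an associative array, mirroring the IFS=: / read -r reg val loop in the trace.
declare -A idinfo   # the harness instead declares local -gA "$ref=()" per device

parse_id_output() {
    local dev=$1 reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # field name with padding stripped (vid, sn, mdts, ...)
        [[ -n $reg && -n $val ]] || continue
        idinfo[$reg]=${val# }           # keep the value past the first space as-is
    done < <(nvme id-ctrl "$dev")       # harness calls /usr/local/src/nvme-cli/nvme here
}

# Example (needs root and an NVMe device):
#   parse_id_output /dev/nvme2
#   echo "vid=${idinfo[vid]} mdts=${idinfo[mdts]} subnqn=${idinfo[subnqn]}"

Values that themselves contain colons stay intact because read hands everything after the first colon to val, which is why the trace records the whole power-state line as nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'.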
16:06:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:01.783 16:06:20 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.783 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:01.784 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:01.785 16:06:20 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:01.785 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:01.786 16:06:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:01.786 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.069 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- 
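[Sketch] At this point in the trace nvme2n1 has been parsed and recorded, and the loop moves on to /sys/class/nvme/nvme2/nvme2n2. The enclosing discovery loop (functions.sh lines 47-63 and 53-58 in the trace) walks /sys/class/nvme/nvme*, filters the controller's PCI address through pci_can_use from scripts/common.sh, runs nvme_get for the controller and then for each nvmeXnY namespace node, and records the results in the ctrls, nvmes, bdfs and ordered_ctrls arrays. A rough standalone sketch of that enumeration is below; the readlink-based PCI lookup and the flat namespace list stored in nvmes are simplifications of what the harness does (it stores the name of a per-controller _ns array instead).

#!/usr/bin/env bash
# Rough sketch (not the harness code) of the controller/namespace discovery
# loop traced above: one id-ctrl pass per controller, one id-ns pass per namespace.
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                              # e.g. nvme2
    pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0 (simplified lookup)

    ctrls[$ctrl_dev]=$ctrl_dev                        # harness: value names the id-ctrl array
    bdfs[$ctrl_dev]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # same indexing as functions.sh@63

    for ns in "$ctrl/${ctrl##*/}n"*; do               # same glob as functions.sh@54
        [[ -e $ns ]] || continue
        nvmes[$ctrl_dev]+="${ns##*/} "                # harness keeps a per-ctrl _ns array
    done
done

for c in "${!ctrls[@]}"; do
    echo "$c @ ${bdfs[$c]} -> ${nvmes[$c]}"
done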
nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.070 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:02.071 16:06:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:02.071 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
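The trace above is nvme/functions.sh's nvme_get helper walking nvme-cli id-ns output for each namespace of nvme2: it splits every "reg : val" line on ':' and evals the pair into a global associative array named after the device (nvme2n1, nvme2n2, nvme2n3). A minimal bash sketch of that pattern, assuming the nvme-cli path shown in the log and leaving out the helper's shift and quirk handling, could look like this (illustration only, not the exact nvme/functions.sh source):

    nvme_get() {
        # nvme_get <array-name> <id-ns|id-ctrl> <device>
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                          # e.g. declare -gA nvme2n3=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue                # skip header/blank lines
            reg=${reg//[[:space:]]/}                 # trim whitespace from the key
            eval "${ref}[$reg]=\"${val# }\""         # e.g. nvme2n3[nlbaf]="7"
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }

After a call such as nvme_get nvme2n3 id-ns /dev/nvme2n3, fields are read back as "${nvme2n3[flbas]}" or "${nvme2n3[lbaf4]}", which is exactly what the assignments in this trace are building up.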
00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.072 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:02.073 
16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:02.073 16:06:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:02.073 16:06:20 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:02.073 16:06:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:02.073 16:06:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:02.073 16:06:20 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:02.073 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:02.074 16:06:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.074 
16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:02.074 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 
16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.075 16:06:20 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:02.075 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
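The dump above is just "name : value" pairs from the controller identify data being eval'ed into nvme3[...] entries one register at a time. A minimal standalone sketch of the same idea, assuming nvme-cli is installed and /dev/nvme0 is the target device (this is not the actual nvme/functions.sh helper), would be:

  # Parse `nvme id-ctrl` output into an associative array, one entry per field.
  declare -A ctrl
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                 # strip padding around the field name
    val=${val#"${val%%[![:space:]]*}"}       # trim leading whitespace from the value
    [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme0)
  echo "sqes=${ctrl[sqes]} cqes=${ctrl[cqes]} oncs=${ctrl[oncs]}"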
00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:02.076 16:06:20 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:12:02.076 16:06:20 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
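The loop running here decides which controller gets the FDP test by checking bit 19 (Flexible Data Placement) of each controller's CTRATT value; 0x8000 fails that test, while the 0x88010 seen below passes it. Roughly the same check can be made directly against live devices with nvme-cli (device paths are illustrative, and this bypasses the cached nvme0..nvme3 variables the script uses):

  # Print controllers whose CTRATT advertises FDP support (bit 19 set).
  for dev in /dev/nvme{0..3}; do
    [[ -e $dev ]] || continue
    ctratt=$(nvme id-ctrl "$dev" --output-format=json | jq -r '.ctratt')
    if (( ctratt & (1 << 19) )); then
      printf '%s supports FDP (ctratt=0x%x)\n' "$dev" "$ctratt"
    fi
  done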
00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:12:02.077 16:06:20 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:12:02.077 16:06:20 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:12:02.077 16:06:20 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:12:02.077 16:06:20 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:02.643 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:03.575 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:03.575 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:03.575 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:03.575 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:03.575 16:06:22 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:03.575 16:06:22 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:03.575 16:06:22 
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable
00:12:03.575 16:06:22 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:12:03.575 ************************************
00:12:03.575 START TEST nvme_flexible_data_placement
00:12:03.575 ************************************
00:12:03.575 16:06:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:12:04.140 Initializing NVMe Controllers
00:12:04.140 Attaching to 0000:00:13.0
00:12:04.140 Controller supports FDP Attached to 0000:00:13.0
00:12:04.140 Namespace ID: 1 Endurance Group ID: 1
00:12:04.140 Initialization complete.
00:12:04.140
00:12:04.140 ==================================
00:12:04.140 == FDP tests for Namespace: #01 ==
00:12:04.140 ==================================
00:12:04.140
00:12:04.140 Get Feature: FDP:
00:12:04.140 =================
00:12:04.140 Enabled: Yes
00:12:04.140 FDP configuration Index: 0
00:12:04.140
00:12:04.140 FDP configurations log page
00:12:04.140 ===========================
00:12:04.140 Number of FDP configurations: 1
00:12:04.140 Version: 0
00:12:04.140 Size: 112
00:12:04.140 FDP Configuration Descriptor: 0
00:12:04.140 Descriptor Size: 96
00:12:04.140 Reclaim Group Identifier format: 2
00:12:04.140 FDP Volatile Write Cache: Not Present
00:12:04.140 FDP Configuration: Valid
00:12:04.140 Vendor Specific Size: 0
00:12:04.140 Number of Reclaim Groups: 2
00:12:04.140 Number of Reclaim Unit Handles: 8
00:12:04.140 Max Placement Identifiers: 128
00:12:04.140 Number of Namespaces Supported: 256
00:12:04.140 Reclaim unit Nominal Size: 6000000 bytes
00:12:04.140 Estimated Reclaim Unit Time Limit: Not Reported
00:12:04.140 RUH Desc #000: RUH Type: Initially Isolated
00:12:04.140 RUH Desc #001: RUH Type: Initially Isolated
00:12:04.140 RUH Desc #002: RUH Type: Initially Isolated
00:12:04.140 RUH Desc #003: RUH Type: Initially Isolated
00:12:04.140 RUH Desc #004: RUH Type: Initially Isolated
00:12:04.140 RUH Desc #005: RUH Type: Initially Isolated
00:12:04.140 RUH Desc #006: RUH Type: Initially Isolated
00:12:04.140 RUH Desc #007: RUH Type: Initially Isolated
00:12:04.140
00:12:04.140 FDP reclaim unit handle usage log page
00:12:04.140 ======================================
00:12:04.140 Number of Reclaim Unit Handles: 8
00:12:04.140 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:12:04.141 RUH Usage Desc #001: RUH Attributes: Unused
00:12:04.141 RUH Usage Desc #002: RUH Attributes: Unused
00:12:04.141 RUH Usage Desc #003: RUH Attributes: Unused
00:12:04.141 RUH Usage Desc #004: RUH Attributes: Unused
00:12:04.141 RUH Usage Desc #005: RUH Attributes: Unused
00:12:04.141 RUH Usage Desc #006: RUH Attributes: Unused
00:12:04.141 RUH Usage Desc #007: RUH Attributes: Unused
00:12:04.141
00:12:04.141 FDP statistics log page
00:12:04.141 =======================
00:12:04.141 Host bytes with metadata written: 975765504
00:12:04.141 Media bytes with metadata written: 975859712
00:12:04.141 Media bytes erased: 0
00:12:04.141
00:12:04.141 FDP Reclaim unit handle status
00:12:04.141 ==============================
00:12:04.141 Number of RUHS descriptors: 2
00:12:04.141 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001d70
00:12:04.141 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:12:04.141
00:12:04.141 FDP write on placement id: 0 success
00:12:04.141
00:12:04.141 Set Feature: Enabling FDP events on Placement handle: #0
Success
00:12:04.141
00:12:04.141 IO mgmt send: RUH update for Placement ID: #0 Success
00:12:04.141
00:12:04.141 Get Feature: FDP Events for Placement handle: #0
00:12:04.141 ========================
00:12:04.141 Number of FDP Events: 6
00:12:04.141 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:12:04.141 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:12:04.141 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes
00:12:04.141 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:12:04.141 FDP Event: #4 Type: Media Reallocated Enabled: No
00:12:04.141 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:12:04.141
00:12:04.141 FDP events log page
00:12:04.141 ===================
00:12:04.141 Number of FDP events: 1
00:12:04.141 FDP Event #0:
00:12:04.141 Event Type: RU Not Written to Capacity
00:12:04.141 Placement Identifier: Valid
00:12:04.141 NSID: Valid
00:12:04.141 Location: Valid
00:12:04.141 Placement Identifier: 0
00:12:04.141 Event Timestamp: 8
00:12:04.141 Namespace Identifier: 1
00:12:04.141 Reclaim Group Identifier: 0
00:12:04.141 Reclaim Unit Handle Identifier: 0
00:12:04.141
00:12:04.141 FDP test passed
00:12:04.141
00:12:04.141 real 0m0.295s
00:12:04.141 user 0m0.091s
00:12:04.141 sys 0m0.103s
00:12:04.141 16:06:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable
00:12:04.141 16:06:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:12:04.141 ************************************
00:12:04.141 END TEST nvme_flexible_data_placement
00:12:04.141 ************************************
00:12:04.141
00:12:04.141 real 0m8.955s
00:12:04.141 user 0m1.519s
00:12:04.141 sys 0m2.465s
00:12:04.141 16:06:22 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable
00:12:04.141 16:06:22 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:12:04.141 ************************************
00:12:04.141 END TEST nvme_fdp
00:12:04.141 ************************************
00:12:04.141 16:06:22 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:12:04.141 16:06:22 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:12:04.141 16:06:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:12:04.141 16:06:22 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:12:04.141 16:06:22 -- common/autotest_common.sh@10 -- # set +x
00:12:04.141 ************************************
00:12:04.141 START TEST nvme_rpc
00:12:04.141 ************************************
00:12:04.141 16:06:22 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:12:04.141 * Looking for test storage...
00:12:04.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:04.141 16:06:22 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:04.141 16:06:22 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:04.141 16:06:22 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:04.399 16:06:22 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.399 16:06:22 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:12:04.400 16:06:22 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.400 16:06:22 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:04.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.400 --rc genhtml_branch_coverage=1 00:12:04.400 --rc genhtml_function_coverage=1 00:12:04.400 --rc genhtml_legend=1 00:12:04.400 --rc geninfo_all_blocks=1 00:12:04.400 --rc geninfo_unexecuted_blocks=1 00:12:04.400 00:12:04.400 ' 00:12:04.400 16:06:22 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:04.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.400 --rc genhtml_branch_coverage=1 00:12:04.400 --rc genhtml_function_coverage=1 00:12:04.400 --rc genhtml_legend=1 00:12:04.400 --rc geninfo_all_blocks=1 00:12:04.400 --rc geninfo_unexecuted_blocks=1 00:12:04.400 00:12:04.400 ' 00:12:04.400 16:06:22 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:12:04.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.400 --rc genhtml_branch_coverage=1 00:12:04.400 --rc genhtml_function_coverage=1 00:12:04.400 --rc genhtml_legend=1 00:12:04.400 --rc geninfo_all_blocks=1 00:12:04.400 --rc geninfo_unexecuted_blocks=1 00:12:04.400 00:12:04.400 ' 00:12:04.400 16:06:22 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:04.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.400 --rc genhtml_branch_coverage=1 00:12:04.400 --rc genhtml_function_coverage=1 00:12:04.400 --rc genhtml_legend=1 00:12:04.400 --rc geninfo_all_blocks=1 00:12:04.400 --rc geninfo_unexecuted_blocks=1 00:12:04.400 00:12:04.400 ' 00:12:04.400 16:06:22 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:04.400 16:06:22 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:04.400 16:06:22 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:12:04.400 16:06:22 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:12:04.400 16:06:22 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:12:04.400 16:06:22 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:12:04.400 16:06:22 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:12:04.400 16:06:22 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:12:04.400 16:06:22 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:04.400 16:06:22 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:04.400 16:06:22 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:12:04.400 16:06:23 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:12:04.400 16:06:23 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:04.400 16:06:23 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:12:04.400 16:06:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:12:04.400 16:06:23 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67154 00:12:04.400 16:06:23 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:04.400 16:06:23 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:04.400 16:06:23 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67154 00:12:04.400 16:06:23 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 67154 ']' 00:12:04.400 16:06:23 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.400 16:06:23 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:04.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.400 16:06:23 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.400 16:06:23 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:04.400 16:06:23 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.658 [2024-11-04 16:06:23.142469] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
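Once spdk_tgt is up, the body of this test is a short RPC sequence: attach the first NVMe controller as a bdev, call bdev_nvme_apply_firmware with a file that does not exist, expect the JSON-RPC "open file failed." error shown further down, and detach. Condensed, with the BDF used in this run:

  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  ./scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1   # expected to fail
  ./scripts/rpc.py bdev_nvme_detach_controller Nvme0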
00:12:04.658 [2024-11-04 16:06:23.142589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67154 ] 00:12:04.658 [2024-11-04 16:06:23.322190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:04.916 [2024-11-04 16:06:23.440097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.916 [2024-11-04 16:06:23.440133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.849 16:06:24 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:05.850 16:06:24 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:05.850 16:06:24 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:12:06.107 Nvme0n1 00:12:06.107 16:06:24 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:06.107 16:06:24 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:06.107 request: 00:12:06.107 { 00:12:06.107 "bdev_name": "Nvme0n1", 00:12:06.107 "filename": "non_existing_file", 00:12:06.107 "method": "bdev_nvme_apply_firmware", 00:12:06.107 "req_id": 1 00:12:06.107 } 00:12:06.107 Got JSON-RPC error response 00:12:06.107 response: 00:12:06.107 { 00:12:06.107 "code": -32603, 00:12:06.107 "message": "open file failed." 00:12:06.107 } 00:12:06.365 16:06:24 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:06.365 16:06:24 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:06.365 16:06:24 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:06.365 16:06:25 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:06.365 16:06:25 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67154 00:12:06.365 16:06:25 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 67154 ']' 00:12:06.365 16:06:25 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 67154 00:12:06.365 16:06:25 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:12:06.365 16:06:25 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:06.365 16:06:25 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67154 00:12:06.365 16:06:25 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:06.365 killing process with pid 67154 00:12:06.365 16:06:25 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:06.365 16:06:25 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67154' 00:12:06.365 16:06:25 nvme_rpc -- common/autotest_common.sh@971 -- # kill 67154 00:12:06.365 16:06:25 nvme_rpc -- common/autotest_common.sh@976 -- # wait 67154 00:12:08.940 00:12:08.940 real 0m4.655s 00:12:08.940 user 0m8.504s 00:12:08.940 sys 0m0.780s 00:12:08.940 16:06:27 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:08.941 16:06:27 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.941 ************************************ 00:12:08.941 END TEST nvme_rpc 00:12:08.941 ************************************ 00:12:08.941 16:06:27 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:08.941 16:06:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:12:08.941 16:06:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:08.941 16:06:27 -- common/autotest_common.sh@10 -- # set +x 00:12:08.941 ************************************ 00:12:08.941 START TEST nvme_rpc_timeouts 00:12:08.941 ************************************ 00:12:08.941 16:06:27 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:08.941 * Looking for test storage... 00:12:08.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:08.941 16:06:27 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:08.941 16:06:27 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:08.941 16:06:27 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:12:08.941 16:06:27 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.941 16:06:27 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:12:08.941 16:06:27 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.941 16:06:27 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:08.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.941 --rc genhtml_branch_coverage=1 00:12:08.941 --rc genhtml_function_coverage=1 00:12:08.941 --rc genhtml_legend=1 00:12:08.941 --rc geninfo_all_blocks=1 00:12:08.941 --rc geninfo_unexecuted_blocks=1 00:12:08.941 00:12:08.941 ' 00:12:08.941 16:06:27 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:08.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.941 --rc genhtml_branch_coverage=1 00:12:08.941 --rc genhtml_function_coverage=1 00:12:08.941 --rc genhtml_legend=1 00:12:08.941 --rc geninfo_all_blocks=1 00:12:08.941 --rc geninfo_unexecuted_blocks=1 00:12:08.941 00:12:08.941 ' 00:12:08.941 16:06:27 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:08.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.941 --rc genhtml_branch_coverage=1 00:12:08.941 --rc genhtml_function_coverage=1 00:12:08.941 --rc genhtml_legend=1 00:12:08.941 --rc geninfo_all_blocks=1 00:12:08.941 --rc geninfo_unexecuted_blocks=1 00:12:08.941 00:12:08.941 ' 00:12:08.941 16:06:27 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:08.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.941 --rc genhtml_branch_coverage=1 00:12:08.941 --rc genhtml_function_coverage=1 00:12:08.941 --rc genhtml_legend=1 00:12:08.941 --rc geninfo_all_blocks=1 00:12:08.941 --rc geninfo_unexecuted_blocks=1 00:12:08.941 00:12:08.941 ' 00:12:08.941 16:06:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:08.941 16:06:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67233 00:12:08.941 16:06:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67233 00:12:08.941 16:06:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67266 00:12:08.941 16:06:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
00:12:08.941 16:06:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:08.941 16:06:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67266 00:12:09.220 16:06:27 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 67266 ']' 00:12:09.220 16:06:27 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.220 16:06:27 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:09.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.220 16:06:27 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.220 16:06:27 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:09.220 16:06:27 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:09.220 [2024-11-04 16:06:27.767574] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:12:09.220 [2024-11-04 16:06:27.767718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67266 ] 00:12:09.477 [2024-11-04 16:06:27.947082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:09.477 [2024-11-04 16:06:28.063506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.477 [2024-11-04 16:06:28.063553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.412 16:06:28 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:10.412 Checking default timeout settings: 00:12:10.412 16:06:28 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:12:10.412 16:06:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:10.412 16:06:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:10.669 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:10.669 Making settings changes with rpc: 00:12:10.669 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:10.928 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:12:10.928 Check default vs. 
modified settings: 00:12:10.928 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67233 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67233 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:11.186 Setting action_on_timeout is changed as expected. 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67233 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67233 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:11.186 Setting timeout_us is changed as expected. 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
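These "changed as expected" checks compare two saved configs, one taken before and one after bdev_nvme_set_options. The same comparison can be reproduced by hand (file names below are placeholders; this run uses /tmp/settings_default_67233 and /tmp/settings_modified_67233):

  ./scripts/rpc.py save_config > /tmp/settings_default.json
  ./scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  ./scripts/rpc.py save_config > /tmp/settings_modified.json
  grep -E 'action_on_timeout|timeout_us|timeout_admin_us' /tmp/settings_default.json /tmp/settings_modified.json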
00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67233 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67233 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:11.186 Setting timeout_admin_us is changed as expected. 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67233 /tmp/settings_modified_67233 00:12:11.186 16:06:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67266 00:12:11.186 16:06:29 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 67266 ']' 00:12:11.186 16:06:29 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 67266 00:12:11.186 16:06:29 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:12:11.186 16:06:29 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:11.186 16:06:29 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67266 00:12:11.445 16:06:29 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:11.445 killing process with pid 67266 00:12:11.445 16:06:29 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:11.445 16:06:29 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67266' 00:12:11.445 16:06:29 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 67266 00:12:11.445 16:06:29 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 67266 00:12:13.978 RPC TIMEOUT SETTING TEST PASSED. 00:12:13.978 16:06:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:12:13.978 00:12:13.978 real 0m4.896s 00:12:13.978 user 0m9.191s 00:12:13.978 sys 0m0.787s 00:12:13.978 16:06:32 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:13.978 16:06:32 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:13.978 ************************************ 00:12:13.978 END TEST nvme_rpc_timeouts 00:12:13.978 ************************************ 00:12:13.978 16:06:32 -- spdk/autotest.sh@239 -- # uname -s 00:12:13.978 16:06:32 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:12:13.978 16:06:32 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:13.978 16:06:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:13.978 16:06:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:13.978 16:06:32 -- common/autotest_common.sh@10 -- # set +x 00:12:13.978 ************************************ 00:12:13.978 START TEST sw_hotplug 00:12:13.978 ************************************ 00:12:13.978 16:06:32 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:13.978 * Looking for test storage... 00:12:13.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:13.978 16:06:32 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:13.978 16:06:32 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:12:13.978 16:06:32 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:13.978 16:06:32 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.978 16:06:32 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:12:13.978 16:06:32 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.978 16:06:32 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:13.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.978 --rc genhtml_branch_coverage=1 00:12:13.978 --rc genhtml_function_coverage=1 00:12:13.978 --rc genhtml_legend=1 00:12:13.978 --rc geninfo_all_blocks=1 00:12:13.978 --rc geninfo_unexecuted_blocks=1 00:12:13.978 00:12:13.978 ' 00:12:13.978 16:06:32 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:13.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.978 --rc genhtml_branch_coverage=1 00:12:13.978 --rc genhtml_function_coverage=1 00:12:13.978 --rc genhtml_legend=1 00:12:13.978 --rc geninfo_all_blocks=1 00:12:13.978 --rc geninfo_unexecuted_blocks=1 00:12:13.978 00:12:13.978 ' 00:12:13.978 16:06:32 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:13.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.978 --rc genhtml_branch_coverage=1 00:12:13.978 --rc genhtml_function_coverage=1 00:12:13.978 --rc genhtml_legend=1 00:12:13.978 --rc geninfo_all_blocks=1 00:12:13.978 --rc geninfo_unexecuted_blocks=1 00:12:13.978 00:12:13.978 ' 00:12:13.978 16:06:32 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:13.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.978 --rc genhtml_branch_coverage=1 00:12:13.978 --rc genhtml_function_coverage=1 00:12:13.978 --rc genhtml_legend=1 00:12:13.978 --rc geninfo_all_blocks=1 00:12:13.978 --rc geninfo_unexecuted_blocks=1 00:12:13.978 00:12:13.978 ' 00:12:13.978 16:06:32 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:14.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:14.805 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:14.805 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:14.805 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:14.805 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:14.805 16:06:33 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:12:14.805 16:06:33 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:12:14.805 16:06:33 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
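For reference, the nvme_in_userspace expansion that follows boils down to a single pipeline assembled in scripts/common.sh: list PCI devices, keep class 01 / subclass 08 devices with prog-if 02 (NVMe), and print their BDFs:

  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'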
00:12:14.805 16:06:33 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@233 -- # local class 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:14.805 16:06:33 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:12:14.805 16:06:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:15.065 16:06:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:15.065 16:06:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:15.065 16:06:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:15.065 16:06:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:12:15.065 16:06:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:15.065 16:06:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:15.065 16:06:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:15.065 16:06:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:15.065 16:06:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:12:15.065 16:06:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:15.065 16:06:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:15.065 16:06:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:15.065 16:06:33 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:12:15.065 16:06:33 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:15.065 16:06:33 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:12:15.065 16:06:33 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:12:15.065 16:06:33 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:15.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:15.581 Waiting for block devices as requested 00:12:15.581 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:15.840 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:15.840 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:16.097 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:21.369 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:21.369 16:06:39 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:12:21.369 16:06:39 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:21.626 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:12:21.883 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:21.883 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:12:22.141 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:12:22.708 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:22.708 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:22.708 16:06:41 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:12:22.708 16:06:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:22.708 16:06:41 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:12:22.708 16:06:41 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:12:22.708 16:06:41 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68151 00:12:22.708 16:06:41 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:12:22.708 16:06:41 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:12:22.708 16:06:41 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:22.708 16:06:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:12:22.709 16:06:41 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:22.709 16:06:41 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:22.709 16:06:41 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:22.709 16:06:41 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:22.709 16:06:41 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:12:22.709 16:06:41 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:22.709 16:06:41 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:22.709 16:06:41 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:12:22.709 16:06:41 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:22.709 16:06:41 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:22.966 Initializing NVMe Controllers 00:12:22.966 Attaching to 0000:00:10.0 00:12:22.966 Attaching to 0000:00:11.0 00:12:22.966 Attached to 0000:00:10.0 00:12:22.966 Attached to 0000:00:11.0 00:12:22.966 Initialization complete. Starting I/O... 
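For reference, the nvme_in_userspace / iter_pci_class_code trace above reduces to a single lspci pipeline that selects PCI functions of class 01, subclass 08, prog-if 02 (NVMe). A minimal sketch, joining the exact commands shown in the trace into one line:

  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # On this VM the trace finds four controllers (0000:00:10.0 through 0000:00:13.0);
  # nvme_count=2 then trims the list to the first two, matching the PCI_ALLOWED setting above.
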
00:12:22.966 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:12:22.966 QEMU NVMe Ctrl (12341 ): 2 I/Os completed (+2) 00:12:22.966 00:12:24.342 QEMU NVMe Ctrl (12340 ): 1456 I/Os completed (+1456) 00:12:24.342 QEMU NVMe Ctrl (12341 ): 1463 I/Os completed (+1461) 00:12:24.342 00:12:25.278 QEMU NVMe Ctrl (12340 ): 3235 I/Os completed (+1779) 00:12:25.278 QEMU NVMe Ctrl (12341 ): 3239 I/Os completed (+1776) 00:12:25.278 00:12:26.218 QEMU NVMe Ctrl (12340 ): 4979 I/Os completed (+1744) 00:12:26.218 QEMU NVMe Ctrl (12341 ): 5011 I/Os completed (+1772) 00:12:26.218 00:12:27.160 QEMU NVMe Ctrl (12340 ): 6751 I/Os completed (+1772) 00:12:27.160 QEMU NVMe Ctrl (12341 ): 6789 I/Os completed (+1778) 00:12:27.160 00:12:28.096 QEMU NVMe Ctrl (12340 ): 8503 I/Os completed (+1752) 00:12:28.096 QEMU NVMe Ctrl (12341 ): 8541 I/Os completed (+1752) 00:12:28.096 00:12:29.034 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:29.034 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:29.034 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:29.034 [2024-11-04 16:06:47.416603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:29.034 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:29.034 [2024-11-04 16:06:47.419226] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.034 [2024-11-04 16:06:47.419302] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.034 [2024-11-04 16:06:47.419332] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.034 [2024-11-04 16:06:47.419360] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.034 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:29.034 [2024-11-04 16:06:47.423153] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.034 [2024-11-04 16:06:47.423217] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.034 [2024-11-04 16:06:47.423247] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.034 [2024-11-04 16:06:47.423274] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.034 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:29.034 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:29.034 [2024-11-04 16:06:47.461106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
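The bare 'echo 1' traced at sw_hotplug.sh@39-40 is what produces each 'Controller removed' event that follows. The xtrace does not show the redirection target, so the sysfs path below is an assumption based on the standard Linux PCI hot-remove interface rather than something this log confirms:

  # Assumed form of the per-device removal step (path not visible in the xtrace):
  for bdf in 0000:00:10.0 0000:00:11.0; do
      echo 1 > "/sys/bus/pci/devices/$bdf/remove"
  done
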
00:12:29.034 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:29.034 [2024-11-04 16:06:47.463477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.034 [2024-11-04 16:06:47.463525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.034 [2024-11-04 16:06:47.463561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.034 [2024-11-04 16:06:47.463593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.034 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:29.034 [2024-11-04 16:06:47.466911] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.034 [2024-11-04 16:06:47.466968] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.034 [2024-11-04 16:06:47.467000] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.034 [2024-11-04 16:06:47.467019] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.034 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:29.034 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:29.034 EAL: Scan for (pci) bus failed. 00:12:29.034 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:29.034 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:29.034 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:29.034 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:29.034 00:12:29.034 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:29.034 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:29.034 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:29.034 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:29.034 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:29.034 Attaching to 0000:00:10.0 00:12:29.034 Attached to 0000:00:10.0 00:12:29.293 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:29.293 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:29.293 16:06:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:29.293 Attaching to 0000:00:11.0 00:12:29.293 Attached to 0000:00:11.0 00:12:30.230 QEMU NVMe Ctrl (12340 ): 1754 I/Os completed (+1754) 00:12:30.230 QEMU NVMe Ctrl (12341 ): 1552 I/Os completed (+1552) 00:12:30.230 00:12:31.166 QEMU NVMe Ctrl (12340 ): 3610 I/Os completed (+1856) 00:12:31.166 QEMU NVMe Ctrl (12341 ): 3411 I/Os completed (+1859) 00:12:31.166 00:12:32.101 QEMU NVMe Ctrl (12340 ): 5358 I/Os completed (+1748) 00:12:32.101 QEMU NVMe Ctrl (12341 ): 5160 I/Os completed (+1749) 00:12:32.101 00:12:33.037 QEMU NVMe Ctrl (12340 ): 7250 I/Os completed (+1892) 00:12:33.037 QEMU NVMe Ctrl (12341 ): 7054 I/Os completed (+1894) 00:12:33.037 00:12:33.974 QEMU NVMe Ctrl (12340 ): 9122 I/Os completed (+1872) 00:12:33.974 QEMU NVMe Ctrl (12341 ): 8935 I/Os completed (+1881) 00:12:33.974 00:12:34.942 QEMU NVMe Ctrl (12340 ): 10953 I/Os completed (+1831) 00:12:34.942 QEMU NVMe Ctrl (12341 ): 10765 I/Os completed (+1830) 00:12:34.942 00:12:36.319 QEMU NVMe Ctrl (12340 ): 13193 I/Os completed (+2240) 00:12:36.319 QEMU NVMe Ctrl (12341 ): 13008 I/Os completed (+2243) 
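After both controllers disappear, the trace shows the reattach sequence (sw_hotplug.sh@56-62): a single 'echo 1', then per device 'echo uio_pci_generic', the BDF echoed twice, and an empty echo. The redirection targets are again not in the xtrace; a hedged reading using the standard PCI sysfs nodes (only /sys/bus/pci/rescan is confirmed by this log, in the @112 EXIT trap further down) would be:

  echo 1 > /sys/bus/pci/rescan                                           # @56
  for bdf in 0000:00:10.0 0000:00:11.0; do                               # @58
      echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override" # @59, target assumed
      echo "$bdf" > /sys/bus/pci/drivers_probe                           # @60/@61 echo the BDF twice; targets assumed
      echo '' > "/sys/bus/pci/devices/$bdf/driver_override"              # @62, target assumed
  done

The 'Attaching to 0000:00:10.0 / Attached' lines above and the resumed I/O counters confirm the rebind took effect.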
00:12:36.319 00:12:37.255 QEMU NVMe Ctrl (12340 ): 15381 I/Os completed (+2188) 00:12:37.255 QEMU NVMe Ctrl (12341 ): 15196 I/Os completed (+2188) 00:12:37.255 00:12:38.191 QEMU NVMe Ctrl (12340 ): 17549 I/Os completed (+2168) 00:12:38.191 QEMU NVMe Ctrl (12341 ): 17369 I/Os completed (+2173) 00:12:38.191 00:12:39.127 QEMU NVMe Ctrl (12340 ): 19741 I/Os completed (+2192) 00:12:39.127 QEMU NVMe Ctrl (12341 ): 19561 I/Os completed (+2192) 00:12:39.127 00:12:40.064 QEMU NVMe Ctrl (12340 ): 21949 I/Os completed (+2208) 00:12:40.064 QEMU NVMe Ctrl (12341 ): 21769 I/Os completed (+2208) 00:12:40.064 00:12:41.000 QEMU NVMe Ctrl (12340 ): 24141 I/Os completed (+2192) 00:12:41.000 QEMU NVMe Ctrl (12341 ): 23961 I/Os completed (+2192) 00:12:41.000 00:12:41.260 16:06:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:41.260 16:06:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:41.260 16:06:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:41.260 16:06:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:41.260 [2024-11-04 16:06:59.835261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:41.260 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:41.260 [2024-11-04 16:06:59.836934] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.260 [2024-11-04 16:06:59.836995] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.260 [2024-11-04 16:06:59.837017] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.260 [2024-11-04 16:06:59.837040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.260 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:41.260 [2024-11-04 16:06:59.839766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.260 [2024-11-04 16:06:59.839823] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.260 [2024-11-04 16:06:59.839842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.260 [2024-11-04 16:06:59.839865] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.260 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:12:41.260 EAL: Scan for (pci) bus failed. 00:12:41.260 16:06:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:41.260 16:06:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:41.260 [2024-11-04 16:06:59.877449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:41.260 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:41.260 [2024-11-04 16:06:59.878988] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.260 [2024-11-04 16:06:59.879035] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.260 [2024-11-04 16:06:59.879064] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.260 [2024-11-04 16:06:59.879083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.260 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:41.260 [2024-11-04 16:06:59.881556] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.260 [2024-11-04 16:06:59.881593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.260 [2024-11-04 16:06:59.881613] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.260 [2024-11-04 16:06:59.881632] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.260 16:06:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:41.260 16:06:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:41.519 16:06:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:41.519 16:06:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:41.519 16:06:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:41.519 16:07:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:41.519 16:07:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:41.519 16:07:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:41.519 16:07:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:41.519 16:07:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:41.519 Attaching to 0000:00:10.0 00:12:41.519 Attached to 0000:00:10.0 00:12:41.519 16:07:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:41.519 16:07:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:41.519 16:07:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:41.519 Attaching to 0000:00:11.0 00:12:41.519 Attached to 0000:00:11.0 00:12:42.086 QEMU NVMe Ctrl (12340 ): 1188 I/Os completed (+1188) 00:12:42.086 QEMU NVMe Ctrl (12341 ): 948 I/Os completed (+948) 00:12:42.086 00:12:43.022 QEMU NVMe Ctrl (12340 ): 3440 I/Os completed (+2252) 00:12:43.022 QEMU NVMe Ctrl (12341 ): 3200 I/Os completed (+2252) 00:12:43.022 00:12:43.960 QEMU NVMe Ctrl (12340 ): 5672 I/Os completed (+2232) 00:12:43.960 QEMU NVMe Ctrl (12341 ): 5432 I/Os completed (+2232) 00:12:43.960 00:12:45.344 QEMU NVMe Ctrl (12340 ): 7908 I/Os completed (+2236) 00:12:45.344 QEMU NVMe Ctrl (12341 ): 7671 I/Os completed (+2239) 00:12:45.344 00:12:45.925 QEMU NVMe Ctrl (12340 ): 10036 I/Os completed (+2128) 00:12:45.925 QEMU NVMe Ctrl (12341 ): 9802 I/Os completed (+2131) 00:12:45.925 00:12:47.330 QEMU NVMe Ctrl (12340 ): 12268 I/Os completed (+2232) 00:12:47.330 QEMU NVMe Ctrl (12341 ): 12034 I/Os completed (+2232) 00:12:47.330 00:12:48.269 QEMU NVMe Ctrl (12340 ): 14504 I/Os completed (+2236) 00:12:48.269 QEMU NVMe Ctrl (12341 ): 14272 I/Os completed (+2238) 00:12:48.269 00:12:49.209 QEMU NVMe Ctrl (12340 ): 16740 I/Os completed (+2236) 00:12:49.209 QEMU NVMe Ctrl (12341 ): 16508 I/Os completed (+2236) 00:12:49.209 00:12:50.152 
QEMU NVMe Ctrl (12340 ): 18984 I/Os completed (+2244) 00:12:50.152 QEMU NVMe Ctrl (12341 ): 18754 I/Os completed (+2246) 00:12:50.152 00:12:51.089 QEMU NVMe Ctrl (12340 ): 21236 I/Os completed (+2252) 00:12:51.089 QEMU NVMe Ctrl (12341 ): 21006 I/Os completed (+2252) 00:12:51.089 00:12:52.029 QEMU NVMe Ctrl (12340 ): 23384 I/Os completed (+2148) 00:12:52.029 QEMU NVMe Ctrl (12341 ): 23166 I/Os completed (+2160) 00:12:52.029 00:12:52.964 QEMU NVMe Ctrl (12340 ): 25612 I/Os completed (+2228) 00:12:52.964 QEMU NVMe Ctrl (12341 ): 25394 I/Os completed (+2228) 00:12:52.964 00:12:53.531 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:53.532 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:53.532 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:53.532 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:53.532 [2024-11-04 16:07:12.209387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:53.532 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:53.532 [2024-11-04 16:07:12.211218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.532 [2024-11-04 16:07:12.211275] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.532 [2024-11-04 16:07:12.211297] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.532 [2024-11-04 16:07:12.211319] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.532 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:53.532 [2024-11-04 16:07:12.214124] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.532 [2024-11-04 16:07:12.214172] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.532 [2024-11-04 16:07:12.214190] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.532 [2024-11-04 16:07:12.214208] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.532 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:53.532 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:53.532 [2024-11-04 16:07:12.250757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:53.532 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:53.532 [2024-11-04 16:07:12.252357] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.532 [2024-11-04 16:07:12.252446] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.532 [2024-11-04 16:07:12.252495] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.532 [2024-11-04 16:07:12.252538] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.790 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:53.790 [2024-11-04 16:07:12.255205] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.790 [2024-11-04 16:07:12.255251] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.790 [2024-11-04 16:07:12.255275] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.790 [2024-11-04 16:07:12.255295] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.790 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:53.790 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:53.790 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:53.790 EAL: Scan for (pci) bus failed. 00:12:53.790 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:53.790 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:53.790 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:53.790 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:53.790 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:53.790 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:53.790 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:53.790 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:53.790 Attaching to 0000:00:10.0 00:12:53.790 Attached to 0000:00:10.0 00:12:54.049 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:54.049 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:54.049 16:07:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:54.049 Attaching to 0000:00:11.0 00:12:54.049 Attached to 0000:00:11.0 00:12:54.049 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:54.049 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:54.049 [2024-11-04 16:07:12.587869] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:13:06.295 16:07:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:06.295 16:07:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:06.295 16:07:24 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.17 00:13:06.295 16:07:24 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.17 00:13:06.295 16:07:24 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:06.295 16:07:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.17 00:13:06.295 16:07:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.17 2 00:13:06.295 remove_attach_helper took 43.17s to complete (handling 2 nvme drive(s)) 16:07:24 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:13:12.860 16:07:30 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68151 00:13:12.860 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68151) - No such process 00:13:12.860 16:07:30 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68151 00:13:12.860 16:07:30 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:13:12.860 16:07:30 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:13:12.860 16:07:30 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:13:12.860 16:07:30 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68699 00:13:12.860 16:07:30 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:12.860 16:07:30 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:13:12.860 16:07:30 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68699 00:13:12.860 16:07:30 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 68699 ']' 00:13:12.860 16:07:30 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.860 16:07:30 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:12.860 16:07:30 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.860 16:07:30 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:12.860 16:07:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:12.860 [2024-11-04 16:07:30.698497] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
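The '43.17' figures above (autotest_common.sh@711-718 and the sw_hotplug.sh@22 printf) come from plain bash timing: TIMEFORMAT is set to %2R so the `time` keyword emits only elapsed real seconds, which timing_cmd records as helper_time. A minimal sketch of the same idea, with the capture plumbing around exec and file descriptors elided:

  TIMEFORMAT=%2R                         # make `time` print just the real time, two decimals
  time remove_attach_helper 3 6 false    # prints 43.17 here, later reported as
                                         # "remove_attach_helper took 43.17s to complete (handling 2 nvme drive(s))"
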
00:13:12.860 [2024-11-04 16:07:30.698811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68699 ] 00:13:12.860 [2024-11-04 16:07:30.865205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.860 [2024-11-04 16:07:30.974415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.425 16:07:31 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:13.425 16:07:31 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:13:13.425 16:07:31 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:13.425 16:07:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.425 16:07:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:13.425 16:07:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.425 16:07:31 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:13:13.425 16:07:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:13.425 16:07:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:13.425 16:07:31 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:13:13.425 16:07:31 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:13:13.425 16:07:31 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:13:13.425 16:07:31 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:13:13.425 16:07:31 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:13:13.425 16:07:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:13.425 16:07:31 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:13.425 16:07:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:13.425 16:07:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:13.425 16:07:31 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:19.987 16:07:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:19.987 16:07:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:19.987 16:07:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:19.987 16:07:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:19.987 16:07:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:19.987 16:07:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:19.987 16:07:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:19.987 16:07:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:19.987 16:07:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:19.987 16:07:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:19.987 16:07:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:19.987 16:07:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.987 16:07:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:19.987 [2024-11-04 16:07:37.986155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
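In this target-mode phase the harness no longer watches the example app; it asks the running spdk_tgt which NVMe bdevs still exist. The bdev_bdfs helper traced above (sw_hotplug.sh@12-13) is equivalent to the standalone pipeline below; calling scripts/rpc.py directly is an assumed stand-in for the rpc_cmd wrapper the harness actually uses:

  # PCI addresses backing the currently attached NVMe bdevs (same jq path and sort as the trace):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
  # Hotplug monitoring itself was switched on just above with: rpc_cmd bdev_nvme_set_hotplug -e
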
00:13:19.987 [2024-11-04 16:07:37.988704] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.987 [2024-11-04 16:07:37.988814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.987 [2024-11-04 16:07:37.988889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.987 [2024-11-04 16:07:37.988987] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.987 [2024-11-04 16:07:37.989029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.987 [2024-11-04 16:07:37.989083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.987 [2024-11-04 16:07:37.989133] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.987 [2024-11-04 16:07:37.989168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.987 [2024-11-04 16:07:37.989217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.987 [2024-11-04 16:07:37.989364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.987 [2024-11-04 16:07:37.989404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.987 [2024-11-04 16:07:37.989462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.987 16:07:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.987 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:19.987 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:19.987 [2024-11-04 16:07:38.385463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:19.987 [2024-11-04 16:07:38.388002] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.987 [2024-11-04 16:07:38.388176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.987 [2024-11-04 16:07:38.388282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.987 [2024-11-04 16:07:38.388310] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.987 [2024-11-04 16:07:38.388325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.987 [2024-11-04 16:07:38.388338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.987 [2024-11-04 16:07:38.388354] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.987 [2024-11-04 16:07:38.388365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.987 [2024-11-04 16:07:38.388379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.987 [2024-11-04 16:07:38.388393] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.987 [2024-11-04 16:07:38.388406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.987 [2024-11-04 16:07:38.388418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.987 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:19.987 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:19.987 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:19.987 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:19.987 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:19.987 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:19.987 16:07:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.987 16:07:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:19.987 16:07:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.987 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:19.987 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:19.987 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:19.987 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:19.987 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:20.245 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:20.245 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:20.245 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:20.245 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:20.245 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:20.245 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:20.245 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:20.245 16:07:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:32.441 16:07:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:32.441 16:07:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:32.441 16:07:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:32.441 16:07:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:32.441 16:07:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:32.441 16:07:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:32.441 16:07:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.441 16:07:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:32.441 16:07:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.441 16:07:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:32.441 16:07:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:32.441 16:07:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:32.441 16:07:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:32.441 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:32.441 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:32.441 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:32.441 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:32.441 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:32.441 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:32.441 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:32.441 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:32.441 16:07:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.441 16:07:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:32.441 [2024-11-04 16:07:51.065065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
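The 'Still waiting for %s to be gone' and 'sleep 0.5' entries above (sw_hotplug.sh@50-51) are a polling loop: after the devices are removed, bdev_bdfs is re-read every half second until neither BDF is reported any more, and only then does the reattach sequence start. A hedged reconstruction that reproduces the ordering seen in the xtrace (the real script's exact wording may differ):

  while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)) && sleep 0.5; do    # all traced as @50
      printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"          # @51
  done
  # The loop exits on the traced "(( 0 > 0 ))", i.e. once bdev_get_bdevs reports no NVMe bdevs.
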
00:13:32.441 [2024-11-04 16:07:51.067526] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.441 [2024-11-04 16:07:51.067575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:32.441 [2024-11-04 16:07:51.067592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:32.441 [2024-11-04 16:07:51.067619] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.441 [2024-11-04 16:07:51.067631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:32.441 [2024-11-04 16:07:51.067646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:32.441 [2024-11-04 16:07:51.067659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.441 [2024-11-04 16:07:51.067673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:32.441 [2024-11-04 16:07:51.067684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:32.441 [2024-11-04 16:07:51.067699] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:32.441 [2024-11-04 16:07:51.067710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:32.441 [2024-11-04 16:07:51.067725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:32.441 16:07:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.441 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:32.441 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:33.007 [2024-11-04 16:07:51.564255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:33.007 [2024-11-04 16:07:51.566679] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:33.007 [2024-11-04 16:07:51.566721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.007 [2024-11-04 16:07:51.566760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.007 [2024-11-04 16:07:51.566798] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:33.007 [2024-11-04 16:07:51.566812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.007 [2024-11-04 16:07:51.566825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.007 [2024-11-04 16:07:51.566840] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:33.007 [2024-11-04 16:07:51.566852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.007 [2024-11-04 16:07:51.566870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.007 [2024-11-04 16:07:51.566890] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:33.007 [2024-11-04 16:07:51.566904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.007 [2024-11-04 16:07:51.566915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.007 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:33.007 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:33.007 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:33.007 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:33.007 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:33.007 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:33.007 16:07:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.007 16:07:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:33.007 16:07:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.007 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:33.007 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:33.265 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:33.265 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:33.265 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:33.265 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:33.265 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:33.265 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:33.265 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:33.265 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:33.265 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:33.265 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:33.265 16:07:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:45.572 16:08:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:45.572 16:08:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:45.572 16:08:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:45.572 16:08:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:45.572 16:08:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:45.572 16:08:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:45.572 16:08:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.572 16:08:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:45.572 16:08:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.572 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:45.572 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:45.572 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:45.572 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:45.572 [2024-11-04 16:08:04.044186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:45.572 [2024-11-04 16:08:04.047129] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.572 [2024-11-04 16:08:04.047291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.572 [2024-11-04 16:08:04.047440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.572 [2024-11-04 16:08:04.047510] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.572 [2024-11-04 16:08:04.047606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.572 [2024-11-04 16:08:04.047708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.572 [2024-11-04 16:08:04.047773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.572 [2024-11-04 16:08:04.047810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.572 [2024-11-04 16:08:04.047860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.572 [2024-11-04 16:08:04.047914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.572 [2024-11-04 16:08:04.048111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.572 [2024-11-04 16:08:04.048172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.572 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:45.572 16:08:04 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:45.572 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:45.572 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:45.572 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:45.572 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:45.572 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:45.572 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:45.572 16:08:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.572 16:08:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:45.572 16:08:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.572 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:45.572 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:45.831 [2024-11-04 16:08:04.543358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:45.831 [2024-11-04 16:08:04.545951] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.831 [2024-11-04 16:08:04.545993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.831 [2024-11-04 16:08:04.546013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.831 [2024-11-04 16:08:04.546036] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.831 [2024-11-04 16:08:04.546050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.831 [2024-11-04 16:08:04.546062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.831 [2024-11-04 16:08:04.546078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.831 [2024-11-04 16:08:04.546089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.831 [2024-11-04 16:08:04.546105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.831 [2024-11-04 16:08:04.546118] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:45.831 [2024-11-04 16:08:04.546131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.831 [2024-11-04 16:08:04.546143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:46.089 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:46.089 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:46.089 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:46.089 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:46.089 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:46.089 16:08:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.089 16:08:04 
sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:46.089 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:46.089 16:08:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.089 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:46.089 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:46.089 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:46.089 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:46.089 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:46.347 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:46.347 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:46.347 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:46.347 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:46.347 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:46.347 16:08:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:46.347 16:08:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:46.347 16:08:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:58.547 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:58.547 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:58.547 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:58.547 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:58.547 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:58.547 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:58.547 16:08:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.547 16:08:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:58.547 16:08:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.547 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:58.547 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:58.547 16:08:17 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.17 00:13:58.547 16:08:17 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.17 00:13:58.547 16:08:17 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:58.547 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.17 00:13:58.547 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.17 2 00:13:58.547 remove_attach_helper took 45.17s to complete (handling 2 nvme drive(s)) 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:58.547 16:08:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.547 16:08:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:58.548 16:08:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.548 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:58.548 16:08:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.548 16:08:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:58.548 16:08:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.548 16:08:17 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:58.548 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:58.548 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:58.548 16:08:17 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:13:58.548 16:08:17 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:13:58.548 16:08:17 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:13:58.548 16:08:17 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:13:58.548 16:08:17 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:13:58.548 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:58.548 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:58.548 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:58.548 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:58.548 16:08:17 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:05.107 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:05.107 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:05.107 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:05.107 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:05.107 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:05.107 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:05.107 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:05.108 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:05.108 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:05.108 16:08:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.108 16:08:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:05.108 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:05.108 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:05.108 [2024-11-04 16:08:23.188724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:14:05.108 [2024-11-04 16:08:23.190544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.108 [2024-11-04 16:08:23.190599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.108 [2024-11-04 16:08:23.190617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.108 [2024-11-04 16:08:23.190644] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.108 [2024-11-04 16:08:23.190656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.108 [2024-11-04 16:08:23.190670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.108 [2024-11-04 16:08:23.190684] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.108 [2024-11-04 16:08:23.190698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.108 [2024-11-04 16:08:23.190709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.108 [2024-11-04 16:08:23.190724] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.108 [2024-11-04 16:08:23.190736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.108 [2024-11-04 16:08:23.190767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.108 16:08:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.108 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:05.108 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:05.108 [2024-11-04 16:08:23.588068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:05.108 [2024-11-04 16:08:23.590204] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.108 [2024-11-04 16:08:23.590245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.108 [2024-11-04 16:08:23.590265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.108 [2024-11-04 16:08:23.590288] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.108 [2024-11-04 16:08:23.590302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.108 [2024-11-04 16:08:23.590315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.108 [2024-11-04 16:08:23.590331] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.108 [2024-11-04 16:08:23.590342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.108 [2024-11-04 16:08:23.590356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.108 [2024-11-04 16:08:23.590378] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.108 [2024-11-04 16:08:23.590392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.108 [2024-11-04 16:08:23.590404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.108 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:05.108 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:05.108 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:05.108 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:05.108 16:08:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.108 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:05.108 16:08:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:05.108 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:05.108 16:08:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.108 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:05.108 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:05.367 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:05.367 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:05.367 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:05.367 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:05.367 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:05.367 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:05.367 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:05.367 16:08:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:05.367 16:08:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:05.822 16:08:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:05.822 16:08:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:18.033 16:08:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.033 16:08:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:18.033 16:08:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:18.033 16:08:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.033 16:08:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:18.033 [2024-11-04 16:08:36.267678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
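The trace above (sw_hotplug.sh lines 50-51) is the wait loop the hotplug test uses to confirm that the surprise-removed controllers are really gone: it asks the running SPDK target for its bdevs over RPC, pulls out the NVMe PCI addresses with jq, and sleeps 0.5 s between polls until the list is empty, after which the devices are handed back to uio_pci_generic and re-probed. A minimal sketch of that loop, assuming the stock scripts/rpc.py client is reachable (the jq filter and the printf message are taken verbatim from the trace; bdev_bdfs mirrors the helper of the same name):

bdev_bdfs() {
    # Ask the SPDK target for all bdevs and keep only the NVMe PCI addresses
    scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

bdfs=($(bdev_bdfs))
while (( ${#bdfs[@]} > 0 )); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done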
00:14:18.033 [2024-11-04 16:08:36.269288] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.033 [2024-11-04 16:08:36.269337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.033 [2024-11-04 16:08:36.269354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.033 [2024-11-04 16:08:36.269380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.033 [2024-11-04 16:08:36.269393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.033 [2024-11-04 16:08:36.269407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.033 [2024-11-04 16:08:36.269420] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.033 [2024-11-04 16:08:36.269434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.033 [2024-11-04 16:08:36.269446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.033 [2024-11-04 16:08:36.269461] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.033 [2024-11-04 16:08:36.269475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.033 [2024-11-04 16:08:36.269489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.033 16:08:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:18.033 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:18.033 [2024-11-04 16:08:36.667030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:18.033 [2024-11-04 16:08:36.668633] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.033 [2024-11-04 16:08:36.668675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.033 [2024-11-04 16:08:36.668694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.033 [2024-11-04 16:08:36.668715] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.033 [2024-11-04 16:08:36.668733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.033 [2024-11-04 16:08:36.668765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.033 [2024-11-04 16:08:36.668783] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.033 [2024-11-04 16:08:36.668794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.033 [2024-11-04 16:08:36.668808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.033 [2024-11-04 16:08:36.668822] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:18.033 [2024-11-04 16:08:36.668835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.033 [2024-11-04 16:08:36.668847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.291 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:18.291 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:18.291 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:18.291 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:18.291 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:18.291 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:18.291 16:08:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.291 16:08:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:18.291 16:08:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.291 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:18.291 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:18.291 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:18.291 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:18.291 16:08:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:18.549 16:08:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:18.549 16:08:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:18.549 16:08:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:18.549 16:08:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:18.549 16:08:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:18.549 16:08:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:18.549 16:08:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:18.549 16:08:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:30.753 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:30.754 16:08:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.754 16:08:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:30.754 16:08:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:30.754 [2024-11-04 16:08:49.246797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:30.754 [2024-11-04 16:08:49.248511] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.754 [2024-11-04 16:08:49.248559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.754 [2024-11-04 16:08:49.248576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.754 [2024-11-04 16:08:49.248603] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.754 [2024-11-04 16:08:49.248615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.754 [2024-11-04 16:08:49.248632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.754 [2024-11-04 16:08:49.248645] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.754 [2024-11-04 16:08:49.248662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.754 [2024-11-04 16:08:49.248674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.754 [2024-11-04 16:08:49.248689] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.754 [2024-11-04 16:08:49.248700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.754 [2024-11-04 16:08:49.248714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:30.754 16:08:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.754 16:08:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:30.754 16:08:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:30.754 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:31.321 [2024-11-04 16:08:49.745980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:31.322 [2024-11-04 16:08:49.747578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.322 [2024-11-04 16:08:49.747624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.322 [2024-11-04 16:08:49.747644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.322 [2024-11-04 16:08:49.747666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.322 [2024-11-04 16:08:49.747680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.322 [2024-11-04 16:08:49.747693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.322 [2024-11-04 16:08:49.747708] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.322 [2024-11-04 16:08:49.747719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.322 [2024-11-04 16:08:49.747734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.322 [2024-11-04 16:08:49.747768] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.322 [2024-11-04 16:08:49.747787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.322 [2024-11-04 16:08:49.747799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.322 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:31.322 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:31.322 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:31.322 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:31.322 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:31.322 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:14:31.322 16:08:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.322 16:08:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:31.322 16:08:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.322 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:31.322 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:31.322 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:31.322 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:31.322 16:08:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:31.581 16:08:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:31.581 16:08:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:31.581 16:08:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:31.581 16:08:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:31.581 16:08:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:31.581 16:08:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:31.581 16:08:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:31.581 16:08:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:43.828 16:09:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:43.828 16:09:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:43.828 16:09:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:43.828 16:09:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:43.828 16:09:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:43.828 16:09:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:43.828 16:09:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.828 16:09:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:43.828 16:09:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.828 16:09:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:43.828 16:09:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:43.828 16:09:02 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.12 00:14:43.828 16:09:02 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.12 00:14:43.828 16:09:02 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:14:43.828 16:09:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.12 00:14:43.828 16:09:02 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.12 2 00:14:43.828 remove_attach_helper took 45.12s to complete (handling 2 nvme drive(s)) 16:09:02 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:43.828 16:09:02 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68699 00:14:43.828 16:09:02 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 68699 ']' 00:14:43.828 16:09:02 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 68699 00:14:43.828 16:09:02 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:14:43.828 16:09:02 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:43.828 16:09:02 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68699 00:14:43.828 16:09:02 sw_hotplug -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:43.828 16:09:02 
sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:43.828 killing process with pid 68699 00:14:43.828 16:09:02 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68699' 00:14:43.829 16:09:02 sw_hotplug -- common/autotest_common.sh@971 -- # kill 68699 00:14:43.829 16:09:02 sw_hotplug -- common/autotest_common.sh@976 -- # wait 68699 00:14:46.361 16:09:04 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:46.620 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:47.187 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:47.187 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:47.187 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:47.188 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:47.446 00:14:47.446 real 2m33.556s 00:14:47.446 user 1m51.281s 00:14:47.446 sys 0m22.516s 00:14:47.446 16:09:05 sw_hotplug -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:47.446 16:09:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:47.446 ************************************ 00:14:47.446 END TEST sw_hotplug 00:14:47.446 ************************************ 00:14:47.446 16:09:06 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:14:47.446 16:09:06 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:47.446 16:09:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:47.446 16:09:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:47.446 16:09:06 -- common/autotest_common.sh@10 -- # set +x 00:14:47.446 ************************************ 00:14:47.446 START TEST nvme_xnvme 00:14:47.446 ************************************ 00:14:47.446 16:09:06 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:47.446 * Looking for test storage... 
00:14:47.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:47.446 16:09:06 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:47.446 16:09:06 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:14:47.446 16:09:06 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:47.706 16:09:06 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:47.706 16:09:06 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:47.706 16:09:06 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:47.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.706 --rc genhtml_branch_coverage=1 00:14:47.706 --rc genhtml_function_coverage=1 00:14:47.706 --rc genhtml_legend=1 00:14:47.706 --rc geninfo_all_blocks=1 00:14:47.706 --rc geninfo_unexecuted_blocks=1 00:14:47.706 00:14:47.706 ' 00:14:47.706 16:09:06 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:47.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.706 --rc genhtml_branch_coverage=1 00:14:47.706 --rc genhtml_function_coverage=1 00:14:47.706 --rc genhtml_legend=1 00:14:47.706 --rc geninfo_all_blocks=1 00:14:47.706 --rc geninfo_unexecuted_blocks=1 00:14:47.706 00:14:47.706 ' 00:14:47.706 16:09:06 
nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:47.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.706 --rc genhtml_branch_coverage=1 00:14:47.706 --rc genhtml_function_coverage=1 00:14:47.706 --rc genhtml_legend=1 00:14:47.706 --rc geninfo_all_blocks=1 00:14:47.706 --rc geninfo_unexecuted_blocks=1 00:14:47.706 00:14:47.706 ' 00:14:47.706 16:09:06 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:47.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.706 --rc genhtml_branch_coverage=1 00:14:47.706 --rc genhtml_function_coverage=1 00:14:47.706 --rc genhtml_legend=1 00:14:47.706 --rc geninfo_all_blocks=1 00:14:47.706 --rc geninfo_unexecuted_blocks=1 00:14:47.706 00:14:47.706 ' 00:14:47.706 16:09:06 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.706 16:09:06 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.706 16:09:06 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.706 16:09:06 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.706 16:09:06 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.706 16:09:06 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:47.706 16:09:06 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.706 16:09:06 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:14:47.706 16:09:06 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:47.706 16:09:06 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:47.706 16:09:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.706 
************************************ 00:14:47.706 START TEST xnvme_to_malloc_dd_copy 00:14:47.706 ************************************ 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:47.706 16:09:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:47.706 { 00:14:47.706 "subsystems": [ 00:14:47.706 { 00:14:47.706 "subsystem": "bdev", 00:14:47.706 "config": [ 00:14:47.706 { 00:14:47.706 "params": { 00:14:47.706 "block_size": 512, 00:14:47.706 "num_blocks": 2097152, 00:14:47.706 "name": "malloc0" 00:14:47.706 }, 00:14:47.706 "method": "bdev_malloc_create" 00:14:47.706 }, 00:14:47.706 { 00:14:47.706 "params": { 00:14:47.706 "io_mechanism": "libaio", 00:14:47.706 "filename": "/dev/nullb0", 00:14:47.706 "name": "null0" 00:14:47.706 }, 00:14:47.706 "method": "bdev_xnvme_create" 00:14:47.706 }, 
00:14:47.706 { 00:14:47.706 "method": "bdev_wait_for_examine" 00:14:47.706 } 00:14:47.707 ] 00:14:47.707 } 00:14:47.707 ] 00:14:47.707 } 00:14:47.707 [2024-11-04 16:09:06.389744] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:14:47.707 [2024-11-04 16:09:06.389881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70075 ] 00:14:47.966 [2024-11-04 16:09:06.568994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.966 [2024-11-04 16:09:06.676200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.504  [2024-11-04T16:09:10.159Z] Copying: 254/1024 [MB] (254 MBps) [2024-11-04T16:09:11.094Z] Copying: 507/1024 [MB] (253 MBps) [2024-11-04T16:09:12.470Z] Copying: 760/1024 [MB] (253 MBps) [2024-11-04T16:09:16.655Z] Copying: 1024/1024 [MB] (average 256 MBps) 00:14:57.933 00:14:57.933 16:09:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:57.933 16:09:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:57.933 16:09:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:57.933 16:09:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:57.933 { 00:14:57.933 "subsystems": [ 00:14:57.933 { 00:14:57.933 "subsystem": "bdev", 00:14:57.933 "config": [ 00:14:57.933 { 00:14:57.933 "params": { 00:14:57.933 "block_size": 512, 00:14:57.933 "num_blocks": 2097152, 00:14:57.933 "name": "malloc0" 00:14:57.933 }, 00:14:57.933 "method": "bdev_malloc_create" 00:14:57.933 }, 00:14:57.933 { 00:14:57.933 "params": { 00:14:57.933 "io_mechanism": "libaio", 00:14:57.933 "filename": "/dev/nullb0", 00:14:57.933 "name": "null0" 00:14:57.933 }, 00:14:57.933 "method": "bdev_xnvme_create" 00:14:57.933 }, 00:14:57.933 { 00:14:57.933 "method": "bdev_wait_for_examine" 00:14:57.933 } 00:14:57.933 ] 00:14:57.933 } 00:14:57.933 ] 00:14:57.933 } 00:14:57.933 [2024-11-04 16:09:16.063808] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
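The JSON blocks printed above are the bdev configuration that xnvme_to_malloc_dd_copy feeds to spdk_dd through /dev/fd/62: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) on one side and an xnvme bdev wrapped around the null_blk device /dev/nullb0 on the other, after which the data is copied in each direction. A rough standalone equivalent of the first pass, using the binary path shown in the trace and process substitution to supply the config (a sketch of what the helper does, not the helper itself):

# Back null0 with a 1 GiB null block device, as init_null_blk gb=1 does above
modprobe null_blk gb=1

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
)

# remove_null_blk unloads the module again once every pass has finished
modprobe -r null_blk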
00:14:57.933 [2024-11-04 16:09:16.063941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70180 ] 00:14:57.933 [2024-11-04 16:09:16.243478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.933 [2024-11-04 16:09:16.350850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.463  [2024-11-04T16:09:19.752Z] Copying: 266/1024 [MB] (266 MBps) [2024-11-04T16:09:21.127Z] Copying: 530/1024 [MB] (263 MBps) [2024-11-04T16:09:21.694Z] Copying: 793/1024 [MB] (263 MBps) [2024-11-04T16:09:25.881Z] Copying: 1024/1024 [MB] (average 263 MBps) 00:15:07.159 00:15:07.159 16:09:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:15:07.159 16:09:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:07.159 16:09:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:15:07.159 16:09:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:15:07.159 16:09:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:07.159 16:09:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:07.159 { 00:15:07.159 "subsystems": [ 00:15:07.159 { 00:15:07.159 "subsystem": "bdev", 00:15:07.159 "config": [ 00:15:07.159 { 00:15:07.159 "params": { 00:15:07.159 "block_size": 512, 00:15:07.159 "num_blocks": 2097152, 00:15:07.159 "name": "malloc0" 00:15:07.159 }, 00:15:07.159 "method": "bdev_malloc_create" 00:15:07.159 }, 00:15:07.159 { 00:15:07.159 "params": { 00:15:07.159 "io_mechanism": "io_uring", 00:15:07.159 "filename": "/dev/nullb0", 00:15:07.159 "name": "null0" 00:15:07.159 }, 00:15:07.159 "method": "bdev_xnvme_create" 00:15:07.159 }, 00:15:07.159 { 00:15:07.159 "method": "bdev_wait_for_examine" 00:15:07.159 } 00:15:07.159 ] 00:15:07.159 } 00:15:07.159 ] 00:15:07.159 } 00:15:07.159 [2024-11-04 16:09:25.647719] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:15:07.159 [2024-11-04 16:09:25.647867] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70294 ] 00:15:07.159 [2024-11-04 16:09:25.826509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.417 [2024-11-04 16:09:25.930697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.944  [2024-11-04T16:09:29.600Z] Copying: 278/1024 [MB] (278 MBps) [2024-11-04T16:09:30.533Z] Copying: 552/1024 [MB] (273 MBps) [2024-11-04T16:09:31.099Z] Copying: 827/1024 [MB] (274 MBps) [2024-11-04T16:09:35.282Z] Copying: 1024/1024 [MB] (average 275 MBps) 00:15:16.560 00:15:16.560 16:09:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:15:16.560 16:09:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:15:16.560 16:09:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:16.560 16:09:35 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:16.560 { 00:15:16.560 "subsystems": [ 00:15:16.560 { 00:15:16.560 "subsystem": "bdev", 00:15:16.560 "config": [ 00:15:16.560 { 00:15:16.560 "params": { 00:15:16.560 "block_size": 512, 00:15:16.560 "num_blocks": 2097152, 00:15:16.560 "name": "malloc0" 00:15:16.560 }, 00:15:16.560 "method": "bdev_malloc_create" 00:15:16.560 }, 00:15:16.560 { 00:15:16.560 "params": { 00:15:16.560 "io_mechanism": "io_uring", 00:15:16.560 "filename": "/dev/nullb0", 00:15:16.560 "name": "null0" 00:15:16.560 }, 00:15:16.560 "method": "bdev_xnvme_create" 00:15:16.560 }, 00:15:16.560 { 00:15:16.560 "method": "bdev_wait_for_examine" 00:15:16.560 } 00:15:16.560 ] 00:15:16.560 } 00:15:16.560 ] 00:15:16.560 } 00:15:16.560 [2024-11-04 16:09:35.091961] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
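Across the four spdk_dd invocations in this test only two things change: the copy direction (--ib and --ob swapped between malloc0 and null0) and one field of the generated bdev config, the xnvme bdev's io_mechanism. A condensed sketch of the driving loop visible in the trace (variable names as in the trace, the body elided):

for io in libaio io_uring; do
    method_bdev_xnvme_create_0["io_mechanism"]=$io
    # regenerate the JSON config shown above and run spdk_dd malloc0 -> null0,
    # then once more with --ib and --ob swapped for the null0 -> malloc0 pass
done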
00:15:16.560 [2024-11-04 16:09:35.092085] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70403 ] 00:15:16.560 [2024-11-04 16:09:35.274521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.817 [2024-11-04 16:09:35.391837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.353  [2024-11-04T16:09:39.032Z] Copying: 274/1024 [MB] (274 MBps) [2024-11-04T16:09:39.967Z] Copying: 550/1024 [MB] (276 MBps) [2024-11-04T16:09:40.532Z] Copying: 829/1024 [MB] (278 MBps) [2024-11-04T16:09:44.721Z] Copying: 1024/1024 [MB] (average 276 MBps) 00:15:25.999 00:15:25.999 16:09:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:15:25.999 16:09:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:15:25.999 00:15:25.999 real 0m38.190s 00:15:25.999 user 0m33.338s 00:15:25.999 sys 0m4.314s 00:15:25.999 16:09:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:25.999 ************************************ 00:15:25.999 END TEST xnvme_to_malloc_dd_copy 00:15:25.999 ************************************ 00:15:25.999 16:09:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:25.999 16:09:44 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:25.999 16:09:44 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:25.999 16:09:44 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:25.999 16:09:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.999 ************************************ 00:15:25.999 START TEST xnvme_bdevperf 00:15:25.999 ************************************ 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:15:25.999 
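The xnvme_bdevperf test that starts here reuses the same /dev/nullb0-backed xnvme bdev, but instead of spdk_dd it runs the bdevperf example application for five seconds of 4 KiB random reads at queue depth 64, once per io_mechanism. A condensed sketch of one run, assuming the same process-substitution trick for the config (the command line, flags, and JSON fields are the ones printed in the trace below):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(cat <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
    "method": "bdev_xnvme_create" },
  { "method": "bdev_wait_for_examine" }
] } ] }
EOF
) -q 64 -w randread -t 5 -T null0 -o 4096

In the runs that follow, the libaio backend lands around 155 K IOPS and the io_uring backend around 199 K IOPS against the null device.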
16:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:25.999 16:09:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:25.999 { 00:15:25.999 "subsystems": [ 00:15:25.999 { 00:15:25.999 "subsystem": "bdev", 00:15:25.999 "config": [ 00:15:25.999 { 00:15:25.999 "params": { 00:15:25.999 "io_mechanism": "libaio", 00:15:25.999 "filename": "/dev/nullb0", 00:15:25.999 "name": "null0" 00:15:25.999 }, 00:15:25.999 "method": "bdev_xnvme_create" 00:15:25.999 }, 00:15:25.999 { 00:15:25.999 "method": "bdev_wait_for_examine" 00:15:25.999 } 00:15:25.999 ] 00:15:25.999 } 00:15:25.999 ] 00:15:25.999 } 00:15:25.999 [2024-11-04 16:09:44.635168] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:15:25.999 [2024-11-04 16:09:44.635278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70529 ] 00:15:26.257 [2024-11-04 16:09:44.816821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.257 [2024-11-04 16:09:44.935536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.823 Running I/O for 5 seconds... 00:15:28.690 155264.00 IOPS, 606.50 MiB/s [2024-11-04T16:09:48.346Z] 155264.00 IOPS, 606.50 MiB/s [2024-11-04T16:09:49.721Z] 155306.67 IOPS, 606.67 MiB/s [2024-11-04T16:09:50.656Z] 155328.00 IOPS, 606.75 MiB/s [2024-11-04T16:09:50.656Z] 155289.60 IOPS, 606.60 MiB/s 00:15:31.934 Latency(us) 00:15:31.934 [2024-11-04T16:09:50.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.934 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:31.934 null0 : 5.00 155233.79 606.38 0.00 0.00 409.88 393.15 1816.06 00:15:31.934 [2024-11-04T16:09:50.656Z] =================================================================================================================== 00:15:31.934 [2024-11-04T16:09:50.656Z] Total : 155233.79 606.38 0.00 0.00 409.88 393.15 1816.06 00:15:32.870 16:09:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:15:32.870 16:09:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:32.870 16:09:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:15:32.870 16:09:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:15:32.870 16:09:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:32.870 16:09:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:32.870 { 00:15:32.870 "subsystems": [ 00:15:32.870 { 00:15:32.870 "subsystem": "bdev", 00:15:32.870 "config": [ 00:15:32.870 { 00:15:32.870 "params": { 00:15:32.870 "io_mechanism": "io_uring", 00:15:32.870 "filename": "/dev/nullb0", 00:15:32.870 "name": "null0" 00:15:32.870 }, 00:15:32.870 "method": "bdev_xnvme_create" 00:15:32.870 }, 
00:15:32.870 { 00:15:32.870 "method": "bdev_wait_for_examine" 00:15:32.870 } 00:15:32.870 ] 00:15:32.870 } 00:15:32.870 ] 00:15:32.870 } 00:15:32.870 [2024-11-04 16:09:51.519052] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:15:32.870 [2024-11-04 16:09:51.519188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70610 ] 00:15:33.129 [2024-11-04 16:09:51.696892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.129 [2024-11-04 16:09:51.809419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.696 Running I/O for 5 seconds... 00:15:35.567 200960.00 IOPS, 785.00 MiB/s [2024-11-04T16:09:55.226Z] 199968.00 IOPS, 781.12 MiB/s [2024-11-04T16:09:56.161Z] 199658.67 IOPS, 779.92 MiB/s [2024-11-04T16:09:57.539Z] 199504.00 IOPS, 779.31 MiB/s 00:15:38.817 Latency(us) 00:15:38.818 [2024-11-04T16:09:57.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.818 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:38.818 null0 : 5.00 199367.26 778.78 0.00 0.00 318.69 192.46 1697.62 00:15:38.818 [2024-11-04T16:09:57.540Z] =================================================================================================================== 00:15:38.818 [2024-11-04T16:09:57.540Z] Total : 199367.26 778.78 0.00 0.00 318.69 192.46 1697.62 00:15:39.753 16:09:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:15:39.753 16:09:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:15:39.753 00:15:39.753 real 0m13.774s 00:15:39.753 user 0m10.386s 00:15:39.753 sys 0m3.171s 00:15:39.753 16:09:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:39.753 16:09:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:39.753 ************************************ 00:15:39.753 END TEST xnvme_bdevperf 00:15:39.753 ************************************ 00:15:39.753 00:15:39.753 real 0m52.314s 00:15:39.753 user 0m43.892s 00:15:39.753 sys 0m7.677s 00:15:39.753 16:09:58 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:39.753 16:09:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:39.753 ************************************ 00:15:39.753 END TEST nvme_xnvme 00:15:39.753 ************************************ 00:15:39.753 16:09:58 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:39.753 16:09:58 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:39.753 16:09:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:39.753 16:09:58 -- common/autotest_common.sh@10 -- # set +x 00:15:39.753 ************************************ 00:15:39.753 START TEST blockdev_xnvme 00:15:39.753 ************************************ 00:15:39.753 16:09:58 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:40.012 * Looking for test storage... 
00:15:40.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:40.012 16:09:58 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:40.012 16:09:58 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:15:40.012 16:09:58 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:40.012 16:09:58 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:40.012 16:09:58 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.012 16:09:58 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.012 16:09:58 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.012 16:09:58 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.012 16:09:58 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.012 16:09:58 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.013 16:09:58 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:40.013 16:09:58 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.013 16:09:58 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:40.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.013 --rc genhtml_branch_coverage=1 00:15:40.013 --rc genhtml_function_coverage=1 00:15:40.013 --rc genhtml_legend=1 00:15:40.013 --rc geninfo_all_blocks=1 00:15:40.013 --rc geninfo_unexecuted_blocks=1 00:15:40.013 00:15:40.013 ' 00:15:40.013 16:09:58 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:40.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.013 --rc genhtml_branch_coverage=1 00:15:40.013 --rc genhtml_function_coverage=1 00:15:40.013 --rc genhtml_legend=1 
00:15:40.013 --rc geninfo_all_blocks=1 00:15:40.013 --rc geninfo_unexecuted_blocks=1 00:15:40.013 00:15:40.013 ' 00:15:40.013 16:09:58 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:40.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.013 --rc genhtml_branch_coverage=1 00:15:40.013 --rc genhtml_function_coverage=1 00:15:40.013 --rc genhtml_legend=1 00:15:40.013 --rc geninfo_all_blocks=1 00:15:40.013 --rc geninfo_unexecuted_blocks=1 00:15:40.013 00:15:40.013 ' 00:15:40.013 16:09:58 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:40.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.013 --rc genhtml_branch_coverage=1 00:15:40.013 --rc genhtml_function_coverage=1 00:15:40.013 --rc genhtml_legend=1 00:15:40.013 --rc geninfo_all_blocks=1 00:15:40.013 --rc geninfo_unexecuted_blocks=1 00:15:40.013 00:15:40.013 ' 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=70764 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:40.013 16:09:58 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 70764 00:15:40.013 16:09:58 blockdev_xnvme -- common/autotest_common.sh@833 -- # 
'[' -z 70764 ']' 00:15:40.013 16:09:58 blockdev_xnvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.013 16:09:58 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:40.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.013 16:09:58 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.013 16:09:58 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:40.013 16:09:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.013 [2024-11-04 16:09:58.723212] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:15:40.013 [2024-11-04 16:09:58.723576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70764 ] 00:15:40.272 [2024-11-04 16:09:58.905795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.531 [2024-11-04 16:09:59.023053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.474 16:09:59 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:41.474 16:09:59 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:15:41.474 16:09:59 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:41.474 16:09:59 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:15:41.474 16:09:59 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:41.474 16:09:59 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:41.474 16:09:59 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:41.732 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:41.989 Waiting for block devices as requested 00:15:42.250 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:42.250 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:42.250 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:42.507 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:47.776 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:15:47.776 
16:10:06 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.776 nvme0n1 00:15:47.776 nvme1n1 00:15:47.776 nvme2n1 00:15:47.776 nvme2n2 00:15:47.776 nvme2n3 00:15:47.776 nvme3n1 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.776 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.776 16:10:06 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.777 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:47.777 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:15:47.777 16:10:06 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.777 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:15:47.777 16:10:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.777 16:10:06 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.777 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:47.777 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "02d8ddcb-b386-4cd9-a3c4-252a158206b1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "02d8ddcb-b386-4cd9-a3c4-252a158206b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "feacf189-a430-4f94-841f-9c6f7840cd37"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "feacf189-a430-4f94-841f-9c6f7840cd37",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "9dbf3aab-a6aa-4fb7-aa63-392f5e9c208b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9dbf3aab-a6aa-4fb7-aa63-392f5e9c208b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "8269cdeb-396b-43f4-a340-40caca412bc7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8269cdeb-396b-43f4-a340-40caca412bc7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "307b84d7-1d2b-4074-aa3e-ebbc63794309"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "307b84d7-1d2b-4074-aa3e-ebbc63794309",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1a27fb70-2551-4492-ac5c-5853d0440ab4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1a27fb70-2551-4492-ac5c-5853d0440ab4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:47.777 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:47.777 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:47.777 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:15:47.777 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:47.777 16:10:06 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 70764 00:15:47.777 16:10:06 blockdev_xnvme -- 
common/autotest_common.sh@952 -- # '[' -z 70764 ']' 00:15:47.777 16:10:06 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 70764 00:15:47.777 16:10:06 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:15:47.777 16:10:06 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:47.777 16:10:06 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70764 00:15:47.777 killing process with pid 70764 00:15:47.777 16:10:06 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:47.777 16:10:06 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:47.777 16:10:06 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70764' 00:15:47.777 16:10:06 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 70764 00:15:47.777 16:10:06 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 70764 00:15:50.307 16:10:08 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:50.307 16:10:08 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:50.307 16:10:08 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:50.307 16:10:08 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:50.307 16:10:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.307 ************************************ 00:15:50.307 START TEST bdev_hello_world 00:15:50.307 ************************************ 00:15:50.307 16:10:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:50.307 [2024-11-04 16:10:08.956073] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:15:50.307 [2024-11-04 16:10:08.956200] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71140 ] 00:15:50.565 [2024-11-04 16:10:09.139742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.565 [2024-11-04 16:10:09.264294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.131 [2024-11-04 16:10:09.715449] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:51.131 [2024-11-04 16:10:09.715707] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:51.131 [2024-11-04 16:10:09.715738] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:51.132 [2024-11-04 16:10:09.717840] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:51.132 [2024-11-04 16:10:09.718094] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:51.132 [2024-11-04 16:10:09.718119] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:51.132 [2024-11-04 16:10:09.718410] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
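The "Hello World!" round-trip above comes from SPDK's hello_bdev example, which the harness points at the bdev JSON config generated for this run and at the first xnvme bdev. A minimal sketch of re-running it by hand, assuming the same checkout and config paths as on this CI VM:

  # sketch: re-run the hello_bdev example against the first xnvme bdev
  # (paths assume the CI VM layout; bdev.json is the config written for this test run)
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b nvme0n1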
00:15:51.132 00:15:51.132 [2024-11-04 16:10:09.718437] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:52.506 00:15:52.506 real 0m1.962s 00:15:52.506 user 0m1.593s 00:15:52.506 sys 0m0.252s 00:15:52.506 16:10:10 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:52.506 16:10:10 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:52.506 ************************************ 00:15:52.506 END TEST bdev_hello_world 00:15:52.506 ************************************ 00:15:52.506 16:10:10 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:52.506 16:10:10 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:52.506 16:10:10 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:52.506 16:10:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.506 ************************************ 00:15:52.506 START TEST bdev_bounds 00:15:52.506 ************************************ 00:15:52.506 16:10:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:15:52.506 16:10:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71182 00:15:52.506 16:10:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:52.506 Process bdevio pid: 71182 00:15:52.506 16:10:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:52.506 16:10:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71182' 00:15:52.506 16:10:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71182 00:15:52.506 16:10:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 71182 ']' 00:15:52.506 16:10:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.506 16:10:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:52.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.506 16:10:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.506 16:10:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:52.506 16:10:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:52.506 [2024-11-04 16:10:10.997130] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
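The bdev_bounds test starting here wraps SPDK's bdevio tool: bdevio is launched with the options shown above against the same JSON bdev config and left waiting on its RPC socket, then the CUnit suites listed below are kicked off with tests.py. Condensed, and assuming the CI VM paths, the flow looks roughly like:

  # sketch of the bdev_bounds flow traced below
  cd /home/vagrant/spdk_repo/spdk
  ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
  # once it reports 'Waiting for process to start up and listen on ... spdk.sock', run:
  ./test/bdev/bdevio/tests.py perform_tests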
00:15:52.506 [2024-11-04 16:10:10.997281] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71182 ] 00:15:52.506 [2024-11-04 16:10:11.164173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:52.764 [2024-11-04 16:10:11.290126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.764 [2024-11-04 16:10:11.290278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.764 [2024-11-04 16:10:11.290302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.330 16:10:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:53.330 16:10:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:15:53.330 16:10:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:53.330 I/O targets: 00:15:53.330 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:53.330 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:53.330 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:53.330 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:53.330 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:53.330 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:53.330 00:15:53.330 00:15:53.330 CUnit - A unit testing framework for C - Version 2.1-3 00:15:53.330 http://cunit.sourceforge.net/ 00:15:53.330 00:15:53.330 00:15:53.330 Suite: bdevio tests on: nvme3n1 00:15:53.330 Test: blockdev write read block ...passed 00:15:53.330 Test: blockdev write zeroes read block ...passed 00:15:53.330 Test: blockdev write zeroes read no split ...passed 00:15:53.330 Test: blockdev write zeroes read split ...passed 00:15:53.330 Test: blockdev write zeroes read split partial ...passed 00:15:53.330 Test: blockdev reset ...passed 00:15:53.330 Test: blockdev write read 8 blocks ...passed 00:15:53.330 Test: blockdev write read size > 128k ...passed 00:15:53.330 Test: blockdev write read invalid size ...passed 00:15:53.330 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:53.330 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:53.330 Test: blockdev write read max offset ...passed 00:15:53.330 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:53.330 Test: blockdev writev readv 8 blocks ...passed 00:15:53.330 Test: blockdev writev readv 30 x 1block ...passed 00:15:53.330 Test: blockdev writev readv block ...passed 00:15:53.330 Test: blockdev writev readv size > 128k ...passed 00:15:53.330 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:53.330 Test: blockdev comparev and writev ...passed 00:15:53.330 Test: blockdev nvme passthru rw ...passed 00:15:53.330 Test: blockdev nvme passthru vendor specific ...passed 00:15:53.330 Test: blockdev nvme admin passthru ...passed 00:15:53.330 Test: blockdev copy ...passed 00:15:53.330 Suite: bdevio tests on: nvme2n3 00:15:53.330 Test: blockdev write read block ...passed 00:15:53.330 Test: blockdev write zeroes read block ...passed 00:15:53.330 Test: blockdev write zeroes read no split ...passed 00:15:53.589 Test: blockdev write zeroes read split ...passed 00:15:53.589 Test: blockdev write zeroes read split partial ...passed 00:15:53.589 Test: blockdev reset ...passed 
00:15:53.589 Test: blockdev write read 8 blocks ...passed 00:15:53.589 Test: blockdev write read size > 128k ...passed 00:15:53.589 Test: blockdev write read invalid size ...passed 00:15:53.589 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:53.589 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:53.589 Test: blockdev write read max offset ...passed 00:15:53.589 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:53.589 Test: blockdev writev readv 8 blocks ...passed 00:15:53.589 Test: blockdev writev readv 30 x 1block ...passed 00:15:53.589 Test: blockdev writev readv block ...passed 00:15:53.589 Test: blockdev writev readv size > 128k ...passed 00:15:53.589 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:53.589 Test: blockdev comparev and writev ...passed 00:15:53.589 Test: blockdev nvme passthru rw ...passed 00:15:53.589 Test: blockdev nvme passthru vendor specific ...passed 00:15:53.589 Test: blockdev nvme admin passthru ...passed 00:15:53.589 Test: blockdev copy ...passed 00:15:53.589 Suite: bdevio tests on: nvme2n2 00:15:53.589 Test: blockdev write read block ...passed 00:15:53.589 Test: blockdev write zeroes read block ...passed 00:15:53.589 Test: blockdev write zeroes read no split ...passed 00:15:53.589 Test: blockdev write zeroes read split ...passed 00:15:53.589 Test: blockdev write zeroes read split partial ...passed 00:15:53.589 Test: blockdev reset ...passed 00:15:53.589 Test: blockdev write read 8 blocks ...passed 00:15:53.589 Test: blockdev write read size > 128k ...passed 00:15:53.589 Test: blockdev write read invalid size ...passed 00:15:53.589 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:53.589 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:53.589 Test: blockdev write read max offset ...passed 00:15:53.589 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:53.589 Test: blockdev writev readv 8 blocks ...passed 00:15:53.589 Test: blockdev writev readv 30 x 1block ...passed 00:15:53.589 Test: blockdev writev readv block ...passed 00:15:53.589 Test: blockdev writev readv size > 128k ...passed 00:15:53.589 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:53.589 Test: blockdev comparev and writev ...passed 00:15:53.589 Test: blockdev nvme passthru rw ...passed 00:15:53.589 Test: blockdev nvme passthru vendor specific ...passed 00:15:53.589 Test: blockdev nvme admin passthru ...passed 00:15:53.589 Test: blockdev copy ...passed 00:15:53.589 Suite: bdevio tests on: nvme2n1 00:15:53.589 Test: blockdev write read block ...passed 00:15:53.589 Test: blockdev write zeroes read block ...passed 00:15:53.589 Test: blockdev write zeroes read no split ...passed 00:15:53.589 Test: blockdev write zeroes read split ...passed 00:15:53.589 Test: blockdev write zeroes read split partial ...passed 00:15:53.589 Test: blockdev reset ...passed 00:15:53.589 Test: blockdev write read 8 blocks ...passed 00:15:53.589 Test: blockdev write read size > 128k ...passed 00:15:53.589 Test: blockdev write read invalid size ...passed 00:15:53.589 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:53.589 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:53.589 Test: blockdev write read max offset ...passed 00:15:53.589 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:53.589 Test: blockdev writev readv 8 blocks 
...passed 00:15:53.589 Test: blockdev writev readv 30 x 1block ...passed 00:15:53.589 Test: blockdev writev readv block ...passed 00:15:53.589 Test: blockdev writev readv size > 128k ...passed 00:15:53.589 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:53.589 Test: blockdev comparev and writev ...passed 00:15:53.589 Test: blockdev nvme passthru rw ...passed 00:15:53.589 Test: blockdev nvme passthru vendor specific ...passed 00:15:53.589 Test: blockdev nvme admin passthru ...passed 00:15:53.589 Test: blockdev copy ...passed 00:15:53.589 Suite: bdevio tests on: nvme1n1 00:15:53.589 Test: blockdev write read block ...passed 00:15:53.589 Test: blockdev write zeroes read block ...passed 00:15:53.589 Test: blockdev write zeroes read no split ...passed 00:15:53.589 Test: blockdev write zeroes read split ...passed 00:15:53.848 Test: blockdev write zeroes read split partial ...passed 00:15:53.848 Test: blockdev reset ...passed 00:15:53.848 Test: blockdev write read 8 blocks ...passed 00:15:53.848 Test: blockdev write read size > 128k ...passed 00:15:53.848 Test: blockdev write read invalid size ...passed 00:15:53.848 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:53.848 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:53.848 Test: blockdev write read max offset ...passed 00:15:53.848 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:53.848 Test: blockdev writev readv 8 blocks ...passed 00:15:53.848 Test: blockdev writev readv 30 x 1block ...passed 00:15:53.848 Test: blockdev writev readv block ...passed 00:15:53.848 Test: blockdev writev readv size > 128k ...passed 00:15:53.848 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:53.848 Test: blockdev comparev and writev ...passed 00:15:53.848 Test: blockdev nvme passthru rw ...passed 00:15:53.848 Test: blockdev nvme passthru vendor specific ...passed 00:15:53.848 Test: blockdev nvme admin passthru ...passed 00:15:53.848 Test: blockdev copy ...passed 00:15:53.848 Suite: bdevio tests on: nvme0n1 00:15:53.848 Test: blockdev write read block ...passed 00:15:53.848 Test: blockdev write zeroes read block ...passed 00:15:53.848 Test: blockdev write zeroes read no split ...passed 00:15:53.848 Test: blockdev write zeroes read split ...passed 00:15:53.848 Test: blockdev write zeroes read split partial ...passed 00:15:53.848 Test: blockdev reset ...passed 00:15:53.848 Test: blockdev write read 8 blocks ...passed 00:15:53.848 Test: blockdev write read size > 128k ...passed 00:15:53.848 Test: blockdev write read invalid size ...passed 00:15:53.848 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:53.848 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:53.848 Test: blockdev write read max offset ...passed 00:15:53.848 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:53.848 Test: blockdev writev readv 8 blocks ...passed 00:15:53.848 Test: blockdev writev readv 30 x 1block ...passed 00:15:53.848 Test: blockdev writev readv block ...passed 00:15:53.848 Test: blockdev writev readv size > 128k ...passed 00:15:53.848 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:53.848 Test: blockdev comparev and writev ...passed 00:15:53.848 Test: blockdev nvme passthru rw ...passed 00:15:53.848 Test: blockdev nvme passthru vendor specific ...passed 00:15:53.848 Test: blockdev nvme admin passthru ...passed 00:15:53.848 Test: blockdev copy ...passed 
00:15:53.848 00:15:53.848 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.848 suites 6 6 n/a 0 0 00:15:53.848 tests 138 138 138 0 0 00:15:53.848 asserts 780 780 780 0 n/a 00:15:53.848 00:15:53.848 Elapsed time = 1.267 seconds 00:15:53.848 0 00:15:53.848 16:10:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71182 00:15:53.848 16:10:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 71182 ']' 00:15:53.848 16:10:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 71182 00:15:53.849 16:10:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:15:53.849 16:10:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:53.849 16:10:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71182 00:15:53.849 killing process with pid 71182 00:15:53.849 16:10:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:53.849 16:10:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:53.849 16:10:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71182' 00:15:53.849 16:10:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 71182 00:15:53.849 16:10:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 71182 00:15:55.226 ************************************ 00:15:55.226 END TEST bdev_bounds 00:15:55.226 ************************************ 00:15:55.226 16:10:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:55.226 00:15:55.226 real 0m2.695s 00:15:55.226 user 0m6.698s 00:15:55.226 sys 0m0.402s 00:15:55.226 16:10:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:55.226 16:10:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:55.226 16:10:13 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:55.226 16:10:13 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:55.226 16:10:13 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:55.227 16:10:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:55.227 ************************************ 00:15:55.227 START TEST bdev_nbd 00:15:55.227 ************************************ 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71241 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71241 /var/tmp/spdk-nbd.sock 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 71241 ']' 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:55.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:55.227 16:10:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:55.227 [2024-11-04 16:10:13.768543] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
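The bdev_svc app started above listens on /var/tmp/spdk-nbd.sock and lets the harness expose the same six xnvme bdevs as kernel /dev/nbdN devices. The traces that follow attach each bdev, wait for the node to show up in /proc/partitions, sanity-check it with a single direct-I/O 4 KiB read, and later detach it. For one device the cycle is roughly as below; a sketch reusing the socket path and names from this run, with $RPC as shorthand for the rpc.py invocation seen throughout the trace:

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  $RPC nbd_start_disk nvme0n1 /dev/nbd0      # export the bdev as /dev/nbd0
  grep -q -w nbd0 /proc/partitions           # harness polls until the device appears
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct   # one-block read check
  $RPC nbd_get_disks                         # JSON list of active nbd exports
  $RPC nbd_stop_disk /dev/nbd0               # tear the export down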
00:15:55.227 [2024-11-04 16:10:13.769075] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.484 [2024-11-04 16:10:13.951939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.484 [2024-11-04 16:10:14.068621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.051 16:10:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:56.051 16:10:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:15:56.051 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:56.051 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:56.051 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:56.051 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:56.051 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:56.051 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:56.051 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:56.051 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:56.051 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:56.051 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:56.051 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:56.051 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:56.051 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.309 
1+0 records in 00:15:56.309 1+0 records out 00:15:56.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056676 s, 7.2 MB/s 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:56.309 16:10:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:56.567 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:56.567 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:56.567 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:56.567 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:56.567 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:56.567 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:56.567 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:56.567 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:56.567 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:56.568 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:56.568 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:56.568 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.568 1+0 records in 00:15:56.568 1+0 records out 00:15:56.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593539 s, 6.9 MB/s 00:15:56.568 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.568 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:56.568 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.568 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:56.568 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:56.568 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:56.568 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:56.568 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:56.844 16:10:15 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.844 1+0 records in 00:15:56.844 1+0 records out 00:15:56.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526888 s, 7.8 MB/s 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:56.844 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.115 1+0 records in 00:15:57.115 1+0 records out 00:15:57.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526611 s, 7.8 MB/s 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:57.115 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:15:57.373 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:57.373 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:57.373 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:57.373 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:15:57.373 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:57.373 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:57.373 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:57.373 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:15:57.373 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:57.373 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:57.373 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:57.373 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.373 1+0 records in 00:15:57.373 1+0 records out 00:15:57.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743087 s, 5.5 MB/s 00:15:57.374 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.374 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:57.374 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.374 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:57.374 16:10:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:57.374 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:57.374 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:57.374 16:10:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:57.632 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:57.632 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:57.632 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:57.632 16:10:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:15:57.633 16:10:16 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:57.633 16:10:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:57.633 16:10:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:57.633 16:10:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:15:57.633 16:10:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:57.633 16:10:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:57.633 16:10:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:57.633 16:10:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.633 1+0 records in 00:15:57.633 1+0 records out 00:15:57.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.001805 s, 2.3 MB/s 00:15:57.633 16:10:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.633 16:10:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:57.633 16:10:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.633 16:10:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:57.633 16:10:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:57.633 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:57.633 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:57.633 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:57.891 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:57.891 { 00:15:57.891 "nbd_device": "/dev/nbd0", 00:15:57.891 "bdev_name": "nvme0n1" 00:15:57.891 }, 00:15:57.891 { 00:15:57.891 "nbd_device": "/dev/nbd1", 00:15:57.891 "bdev_name": "nvme1n1" 00:15:57.891 }, 00:15:57.891 { 00:15:57.891 "nbd_device": "/dev/nbd2", 00:15:57.891 "bdev_name": "nvme2n1" 00:15:57.891 }, 00:15:57.891 { 00:15:57.891 "nbd_device": "/dev/nbd3", 00:15:57.891 "bdev_name": "nvme2n2" 00:15:57.891 }, 00:15:57.891 { 00:15:57.891 "nbd_device": "/dev/nbd4", 00:15:57.891 "bdev_name": "nvme2n3" 00:15:57.891 }, 00:15:57.891 { 00:15:57.891 "nbd_device": "/dev/nbd5", 00:15:57.891 "bdev_name": "nvme3n1" 00:15:57.891 } 00:15:57.891 ]' 00:15:57.891 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:57.891 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:57.891 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:57.891 { 00:15:57.891 "nbd_device": "/dev/nbd0", 00:15:57.891 "bdev_name": "nvme0n1" 00:15:57.891 }, 00:15:57.891 { 00:15:57.891 "nbd_device": "/dev/nbd1", 00:15:57.891 "bdev_name": "nvme1n1" 00:15:57.891 }, 00:15:57.891 { 00:15:57.891 "nbd_device": "/dev/nbd2", 00:15:57.891 "bdev_name": "nvme2n1" 00:15:57.891 }, 00:15:57.891 { 00:15:57.891 "nbd_device": "/dev/nbd3", 00:15:57.891 "bdev_name": "nvme2n2" 00:15:57.891 }, 00:15:57.891 { 00:15:57.891 "nbd_device": "/dev/nbd4", 00:15:57.891 "bdev_name": "nvme2n3" 00:15:57.891 }, 00:15:57.891 { 00:15:57.891 "nbd_device": 
"/dev/nbd5", 00:15:57.891 "bdev_name": "nvme3n1" 00:15:57.891 } 00:15:57.891 ]' 00:15:57.891 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:57.891 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:57.891 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:57.891 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:57.891 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:57.891 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.891 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:58.150 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:58.150 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:58.150 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:58.150 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.150 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.150 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:58.150 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.150 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.150 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.150 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:58.409 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:58.409 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:58.409 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:58.409 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.409 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.409 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:58.409 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.409 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.409 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.409 16:10:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.667 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:58.926 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:58.926 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:58.926 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:58.926 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.926 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.926 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:58.926 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.926 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.926 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.926 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:59.184 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:59.184 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:59.184 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:59.184 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.184 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.184 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:59.184 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.184 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.184 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:59.184 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:59.184 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:59.443 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:59.443 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:59.443 16:10:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:59.443 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:59.701 /dev/nbd0 00:15:59.701 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:59.701 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:59.701 16:10:18 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:59.701 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:59.701 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:59.701 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:59.701 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:59.701 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:59.701 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:59.701 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:59.701 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.701 1+0 records in 00:15:59.701 1+0 records out 00:15:59.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508438 s, 8.1 MB/s 00:15:59.701 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.701 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:59.701 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.701 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:59.701 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:59.701 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:59.702 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:59.702 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:15:59.961 /dev/nbd1 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.961 1+0 records in 00:15:59.961 1+0 records out 00:15:59.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605407 s, 6.8 MB/s 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:59.961 16:10:18 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:59.961 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:16:00.220 /dev/nbd10 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.220 1+0 records in 00:16:00.220 1+0 records out 00:16:00.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775898 s, 5.3 MB/s 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:00.220 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:16:00.478 /dev/nbd11 00:16:00.478 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:00.478 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:00.478 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:16:00.478 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:16:00.478 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:00.478 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:00.478 16:10:18 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:16:00.478 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:16:00.479 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:00.479 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:00.479 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.479 1+0 records in 00:16:00.479 1+0 records out 00:16:00.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551731 s, 7.4 MB/s 00:16:00.479 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.479 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:16:00.479 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.479 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:00.479 16:10:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:16:00.479 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.479 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:00.479 16:10:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:16:00.479 /dev/nbd12 00:16:00.479 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.737 1+0 records in 00:16:00.737 1+0 records out 00:16:00.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137478 s, 3.0 MB/s 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:00.737 /dev/nbd13 00:16:00.737 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:00.738 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:00.738 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:16:00.738 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:16:00.738 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:00.738 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:00.738 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:16:00.738 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:16:00.738 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:00.738 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:00.738 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.996 1+0 records in 00:16:00.996 1+0 records out 00:16:00.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000772365 s, 5.3 MB/s 00:16:00.996 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.996 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:16:00.996 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.996 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:00.996 16:10:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:16:00.996 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.996 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:00.996 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:00.996 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:00.996 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:00.996 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:00.996 { 00:16:00.996 "nbd_device": "/dev/nbd0", 00:16:00.996 "bdev_name": "nvme0n1" 00:16:00.996 }, 00:16:00.996 { 00:16:00.996 "nbd_device": "/dev/nbd1", 00:16:00.996 "bdev_name": "nvme1n1" 00:16:00.996 }, 00:16:00.996 { 00:16:00.996 "nbd_device": "/dev/nbd10", 00:16:00.996 "bdev_name": "nvme2n1" 00:16:00.996 }, 00:16:00.996 { 00:16:00.996 "nbd_device": "/dev/nbd11", 00:16:00.996 "bdev_name": "nvme2n2" 00:16:00.996 }, 00:16:00.996 { 00:16:00.996 "nbd_device": "/dev/nbd12", 00:16:00.996 "bdev_name": "nvme2n3" 00:16:00.996 }, 00:16:00.996 { 00:16:00.996 "nbd_device": "/dev/nbd13", 00:16:00.996 "bdev_name": "nvme3n1" 00:16:00.996 } 00:16:00.996 ]' 00:16:00.996 16:10:19 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:00.996 { 00:16:00.996 "nbd_device": "/dev/nbd0", 00:16:00.996 "bdev_name": "nvme0n1" 00:16:00.996 }, 00:16:00.996 { 00:16:00.996 "nbd_device": "/dev/nbd1", 00:16:00.996 "bdev_name": "nvme1n1" 00:16:00.996 }, 00:16:00.996 { 00:16:00.996 "nbd_device": "/dev/nbd10", 00:16:00.996 "bdev_name": "nvme2n1" 00:16:00.996 }, 00:16:00.996 { 00:16:00.996 "nbd_device": "/dev/nbd11", 00:16:00.996 "bdev_name": "nvme2n2" 00:16:00.996 }, 00:16:00.996 { 00:16:00.997 "nbd_device": "/dev/nbd12", 00:16:00.997 "bdev_name": "nvme2n3" 00:16:00.997 }, 00:16:00.997 { 00:16:00.997 "nbd_device": "/dev/nbd13", 00:16:00.997 "bdev_name": "nvme3n1" 00:16:00.997 } 00:16:00.997 ]' 00:16:00.997 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:01.255 /dev/nbd1 00:16:01.255 /dev/nbd10 00:16:01.255 /dev/nbd11 00:16:01.255 /dev/nbd12 00:16:01.255 /dev/nbd13' 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:01.255 /dev/nbd1 00:16:01.255 /dev/nbd10 00:16:01.255 /dev/nbd11 00:16:01.255 /dev/nbd12 00:16:01.255 /dev/nbd13' 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:01.255 256+0 records in 00:16:01.255 256+0 records out 00:16:01.255 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011787 s, 89.0 MB/s 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:01.255 256+0 records in 00:16:01.255 256+0 records out 00:16:01.255 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121663 s, 8.6 MB/s 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:01.255 16:10:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:01.514 256+0 records in 00:16:01.514 256+0 records out 00:16:01.514 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.152336 s, 6.9 MB/s 00:16:01.514 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:01.514 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:01.514 256+0 records in 00:16:01.514 256+0 records out 00:16:01.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122784 s, 8.5 MB/s 00:16:01.514 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:01.514 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:01.772 256+0 records in 00:16:01.772 256+0 records out 00:16:01.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121927 s, 8.6 MB/s 00:16:01.772 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:01.772 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:01.772 256+0 records in 00:16:01.772 256+0 records out 00:16:01.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121673 s, 8.6 MB/s 00:16:01.772 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:01.772 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:02.029 256+0 records in 00:16:02.029 256+0 records out 00:16:02.029 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121501 s, 8.6 MB/s 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:02.029 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:02.030 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:02.030 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.030 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:02.288 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:02.288 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:02.288 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:02.288 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.288 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.288 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:02.288 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.288 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.288 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.288 16:10:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.556 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:02.818 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:02.818 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:02.818 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:02.818 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.818 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.818 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:02.818 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.818 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.818 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.818 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:03.076 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:03.076 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:03.076 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:03.076 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.076 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.076 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:03.076 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:03.076 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.076 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.076 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:03.334 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:03.334 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:03.334 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:03.334 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.334 16:10:21 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.334 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:03.334 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:03.334 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.334 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:03.334 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.334 16:10:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:03.597 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:03.597 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:03.597 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:03.597 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:03.597 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:03.597 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:03.597 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:03.597 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:03.597 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:03.597 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:03.597 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:03.597 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:03.597 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:03.597 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.597 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:03.597 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:03.856 malloc_lvol_verify 00:16:03.856 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:04.115 f2da2bf1-778a-48b3-a99a-56c2ac5d2231 00:16:04.115 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:04.115 8186c8ce-d24b-4714-8c93-7369c3c57fd2 00:16:04.115 16:10:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:04.374 /dev/nbd0 00:16:04.374 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:04.374 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:04.374 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:04.374 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:04.374 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:16:04.374 mke2fs 1.47.0 (5-Feb-2023) 00:16:04.374 Discarding device blocks: 0/4096 done 00:16:04.374 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:04.374 00:16:04.374 Allocating group tables: 0/1 done 00:16:04.374 Writing inode tables: 0/1 done 00:16:04.374 Creating journal (1024 blocks): done 00:16:04.374 Writing superblocks and filesystem accounting information: 0/1 done 00:16:04.374 00:16:04.374 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:04.374 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:04.374 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:04.374 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:04.374 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:04.374 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.374 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71241 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 71241 ']' 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 71241 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71241 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:04.633 killing process with pid 71241 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71241' 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 71241 00:16:04.633 16:10:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 71241 00:16:06.017 16:10:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:06.017 00:16:06.017 real 0m10.811s 00:16:06.017 user 0m13.969s 00:16:06.017 sys 0m4.554s 00:16:06.017 16:10:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:06.017 16:10:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:06.018 ************************************ 
00:16:06.018 END TEST bdev_nbd 00:16:06.018 ************************************ 00:16:06.018 16:10:24 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:16:06.018 16:10:24 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:16:06.018 16:10:24 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:16:06.018 16:10:24 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:16:06.018 16:10:24 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:06.018 16:10:24 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:06.018 16:10:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.018 ************************************ 00:16:06.018 START TEST bdev_fio 00:16:06.018 ************************************ 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:06.018 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # 
echo serialize_overlap=1 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:06.018 ************************************ 00:16:06.018 START TEST bdev_fio_rw_verify 00:16:06.018 ************************************ 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:06.018 16:10:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:06.277 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:06.277 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:06.277 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:06.277 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:06.277 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:06.277 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:06.277 fio-3.35 00:16:06.277 Starting 6 threads 00:16:18.485 00:16:18.485 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=71656: Mon Nov 4 16:10:35 2024 00:16:18.485 read: IOPS=32.9k, BW=128MiB/s (135MB/s)(1285MiB/10001msec) 00:16:18.485 slat (usec): min=2, max=398, avg= 6.04, stdev= 3.28 00:16:18.485 clat (usec): min=101, max=5240, avg=586.54, 
stdev=168.64 00:16:18.485 lat (usec): min=103, max=5245, avg=592.58, stdev=169.31 00:16:18.485 clat percentiles (usec): 00:16:18.485 | 50.000th=[ 627], 99.000th=[ 947], 99.900th=[ 1319], 99.990th=[ 3818], 00:16:18.485 | 99.999th=[ 5211] 00:16:18.485 write: IOPS=33.1k, BW=129MiB/s (136MB/s)(1294MiB/10001msec); 0 zone resets 00:16:18.485 slat (usec): min=6, max=1813, avg=20.03, stdev=22.48 00:16:18.485 clat (usec): min=68, max=5303, avg=660.91, stdev=188.54 00:16:18.485 lat (usec): min=82, max=5318, avg=680.93, stdev=191.09 00:16:18.485 clat percentiles (usec): 00:16:18.485 | 50.000th=[ 676], 99.000th=[ 1254], 99.900th=[ 1926], 99.990th=[ 4178], 00:16:18.485 | 99.999th=[ 5276] 00:16:18.485 bw ( KiB/s): min=109640, max=148905, per=100.00%, avg=133146.05, stdev=1978.78, samples=114 00:16:18.485 iops : min=27410, max=37226, avg=33286.37, stdev=494.69, samples=114 00:16:18.485 lat (usec) : 100=0.01%, 250=3.42%, 500=15.93%, 750=66.21%, 1000=12.52% 00:16:18.485 lat (msec) : 2=1.87%, 4=0.05%, 10=0.01% 00:16:18.485 cpu : usr=60.43%, sys=28.43%, ctx=7420, majf=0, minf=27304 00:16:18.485 IO depths : 1=11.9%, 2=24.4%, 4=50.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.485 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.485 issued rwts: total=328860,331348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.485 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:18.485 00:16:18.485 Run status group 0 (all jobs): 00:16:18.485 READ: bw=128MiB/s (135MB/s), 128MiB/s-128MiB/s (135MB/s-135MB/s), io=1285MiB (1347MB), run=10001-10001msec 00:16:18.485 WRITE: bw=129MiB/s (136MB/s), 129MiB/s-129MiB/s (136MB/s-136MB/s), io=1294MiB (1357MB), run=10001-10001msec 00:16:18.485 ----------------------------------------------------- 00:16:18.485 Suppressions used: 00:16:18.485 count bytes template 00:16:18.485 6 48 /usr/src/fio/parse.c 00:16:18.485 2245 215520 /usr/src/fio/iolog.c 00:16:18.485 1 8 libtcmalloc_minimal.so 00:16:18.485 1 904 libcrypto.so 00:16:18.485 ----------------------------------------------------- 00:16:18.485 00:16:18.485 00:16:18.485 real 0m12.474s 00:16:18.485 user 0m38.213s 00:16:18.485 sys 0m17.486s 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:18.485 ************************************ 00:16:18.485 END TEST bdev_fio_rw_verify 00:16:18.485 ************************************ 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local 
fio_dir=/usr/src/fio 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:18.485 16:10:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "02d8ddcb-b386-4cd9-a3c4-252a158206b1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "02d8ddcb-b386-4cd9-a3c4-252a158206b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "feacf189-a430-4f94-841f-9c6f7840cd37"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "feacf189-a430-4f94-841f-9c6f7840cd37",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "9dbf3aab-a6aa-4fb7-aa63-392f5e9c208b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9dbf3aab-a6aa-4fb7-aa63-392f5e9c208b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "8269cdeb-396b-43f4-a340-40caca412bc7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8269cdeb-396b-43f4-a340-40caca412bc7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "307b84d7-1d2b-4074-aa3e-ebbc63794309"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "307b84d7-1d2b-4074-aa3e-ebbc63794309",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1a27fb70-2551-4492-ac5c-5853d0440ab4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1a27fb70-2551-4492-ac5c-5853d0440ab4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:18.743 16:10:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:18.743 16:10:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:18.743 16:10:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:18.743 /home/vagrant/spdk_repo/spdk 00:16:18.744 16:10:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:18.744 16:10:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
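The trim pass above only targets bdevs whose JSON reports "unmap": true; blockdev.sh@354 applies the jq filter select(.supported_io_types.unmap == true) | .name to the dumped bdev objects. Every xNVMe bdev in this run reports "unmap": false, so the selection comes back empty, no trim job is populated, and the generated bdev.fio is removed. A minimal sketch of the same selection against a live target, assuming the default rpc.py script and /var/tmp/spdk.sock socket rather than the pre-captured JSON used here:

# List bdevs that can service unmap/trim (sketch; bdev_get_bdevs returns a JSON array, hence the leading .[])
./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'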
00:16:18.744 00:16:18.744 real 0m12.700s 00:16:18.744 user 0m38.329s 00:16:18.744 sys 0m17.602s 00:16:18.744 16:10:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:18.744 16:10:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:18.744 ************************************ 00:16:18.744 END TEST bdev_fio 00:16:18.744 ************************************ 00:16:18.744 16:10:37 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:18.744 16:10:37 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:18.744 16:10:37 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:16:18.744 16:10:37 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:18.744 16:10:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:18.744 ************************************ 00:16:18.744 START TEST bdev_verify 00:16:18.744 ************************************ 00:16:18.744 16:10:37 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:18.744 [2024-11-04 16:10:37.411186] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:16:18.744 [2024-11-04 16:10:37.411895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71825 ] 00:16:19.002 [2024-11-04 16:10:37.594288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:19.002 [2024-11-04 16:10:37.709988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.002 [2024-11-04 16:10:37.710017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.568 Running I/O for 5 seconds... 
00:16:21.898 25024.00 IOPS, 97.75 MiB/s [2024-11-04T16:10:41.554Z] 24672.00 IOPS, 96.38 MiB/s [2024-11-04T16:10:42.490Z] 24896.00 IOPS, 97.25 MiB/s [2024-11-04T16:10:43.425Z] 24504.00 IOPS, 95.72 MiB/s [2024-11-04T16:10:43.425Z] 24230.40 IOPS, 94.65 MiB/s 00:16:24.703 Latency(us) 00:16:24.703 [2024-11-04T16:10:43.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.703 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:24.703 Verification LBA range: start 0x0 length 0xa0000 00:16:24.703 nvme0n1 : 5.07 1793.08 7.00 0.00 0.00 71266.20 13159.84 64851.69 00:16:24.703 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:24.703 Verification LBA range: start 0xa0000 length 0xa0000 00:16:24.703 nvme0n1 : 5.05 1876.57 7.33 0.00 0.00 68091.50 9685.64 59377.20 00:16:24.703 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:24.703 Verification LBA range: start 0x0 length 0xbd0bd 00:16:24.703 nvme1n1 : 5.06 2716.98 10.61 0.00 0.00 46938.16 5474.49 54323.82 00:16:24.703 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:24.703 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:24.703 nvme1n1 : 5.05 2814.15 10.99 0.00 0.00 45290.15 6185.12 53692.14 00:16:24.703 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:24.703 Verification LBA range: start 0x0 length 0x80000 00:16:24.703 nvme2n1 : 5.05 1798.00 7.02 0.00 0.00 70832.20 10106.76 64009.46 00:16:24.703 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:24.703 Verification LBA range: start 0x80000 length 0x80000 00:16:24.703 nvme2n1 : 5.04 1904.65 7.44 0.00 0.00 66859.04 8843.41 61903.88 00:16:24.703 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:24.703 Verification LBA range: start 0x0 length 0x80000 00:16:24.703 nvme2n2 : 5.08 1814.63 7.09 0.00 0.00 70003.51 8106.46 59377.20 00:16:24.703 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:24.703 Verification LBA range: start 0x80000 length 0x80000 00:16:24.703 nvme2n2 : 5.05 1874.11 7.32 0.00 0.00 67750.23 12107.05 57271.62 00:16:24.703 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:24.703 Verification LBA range: start 0x0 length 0x80000 00:16:24.703 nvme2n3 : 5.07 1791.32 7.00 0.00 0.00 70782.53 13791.51 57692.74 00:16:24.703 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:24.703 Verification LBA range: start 0x80000 length 0x80000 00:16:24.703 nvme2n3 : 5.06 1873.13 7.32 0.00 0.00 67675.76 11475.38 55166.05 00:16:24.703 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:24.703 Verification LBA range: start 0x0 length 0x20000 00:16:24.703 nvme3n1 : 5.08 1790.69 6.99 0.00 0.00 70702.66 7264.23 65693.92 00:16:24.703 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:24.703 Verification LBA range: start 0x20000 length 0x20000 00:16:24.703 nvme3n1 : 5.07 1893.97 7.40 0.00 0.00 66829.09 1190.97 60640.54 00:16:24.703 [2024-11-04T16:10:43.425Z] =================================================================================================================== 00:16:24.703 [2024-11-04T16:10:43.425Z] Total : 23941.28 93.52 0.00 0.00 63745.01 1190.97 65693.92 00:16:26.088 00:16:26.088 real 0m7.120s 00:16:26.088 user 0m10.895s 00:16:26.088 sys 0m2.050s 00:16:26.088 16:10:44 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:16:26.088 16:10:44 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:26.088 ************************************ 00:16:26.088 END TEST bdev_verify 00:16:26.088 ************************************ 00:16:26.088 16:10:44 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:26.088 16:10:44 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:16:26.088 16:10:44 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:26.088 16:10:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:26.088 ************************************ 00:16:26.088 START TEST bdev_verify_big_io 00:16:26.088 ************************************ 00:16:26.088 16:10:44 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:26.088 [2024-11-04 16:10:44.602270] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:16:26.088 [2024-11-04 16:10:44.602411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71930 ] 00:16:26.088 [2024-11-04 16:10:44.782198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:26.373 [2024-11-04 16:10:44.898315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.373 [2024-11-04 16:10:44.898346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.938 Running I/O for 5 seconds... 
00:16:32.009 1360.00 IOPS, 85.00 MiB/s [2024-11-04T16:10:51.298Z] 2927.00 IOPS, 182.94 MiB/s [2024-11-04T16:10:51.557Z] 3736.00 IOPS, 233.50 MiB/s 00:16:32.835 Latency(us) 00:16:32.835 [2024-11-04T16:10:51.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.835 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:32.835 Verification LBA range: start 0x0 length 0xa000 00:16:32.835 nvme0n1 : 5.59 157.35 9.83 0.00 0.00 788087.45 22108.53 902870.26 00:16:32.835 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:32.835 Verification LBA range: start 0xa000 length 0xa000 00:16:32.835 nvme0n1 : 5.77 138.76 8.67 0.00 0.00 893704.69 103173.14 970248.64 00:16:32.835 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:32.835 Verification LBA range: start 0x0 length 0xbd0b 00:16:32.835 nvme1n1 : 5.47 174.99 10.94 0.00 0.00 698508.87 38532.01 1172383.77 00:16:32.835 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:32.835 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:32.835 nvme1n1 : 5.56 174.54 10.91 0.00 0.00 699145.09 7316.87 744531.07 00:16:32.835 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:32.835 Verification LBA range: start 0x0 length 0x8000 00:16:32.835 nvme2n1 : 5.59 171.59 10.72 0.00 0.00 683667.69 54744.93 559240.53 00:16:32.835 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:32.835 Verification LBA range: start 0x8000 length 0x8000 00:16:32.835 nvme2n1 : 5.76 136.18 8.51 0.00 0.00 871305.44 126334.46 1098267.55 00:16:32.835 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:32.835 Verification LBA range: start 0x0 length 0x8000 00:16:32.835 nvme2n2 : 5.78 130.21 8.14 0.00 0.00 886930.61 88855.24 1435159.44 00:16:32.835 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:32.835 Verification LBA range: start 0x8000 length 0x8000 00:16:32.835 nvme2n2 : 5.77 177.46 11.09 0.00 0.00 657678.80 43795.95 950035.12 00:16:32.835 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:32.835 Verification LBA range: start 0x0 length 0x8000 00:16:32.835 nvme2n3 : 5.78 163.52 10.22 0.00 0.00 687089.14 12159.69 1300402.69 00:16:32.835 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:32.835 Verification LBA range: start 0x8000 length 0x8000 00:16:32.835 nvme2n3 : 5.77 169.01 10.56 0.00 0.00 672660.01 70326.18 1078054.04 00:16:32.835 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:32.835 Verification LBA range: start 0x0 length 0x2000 00:16:32.835 nvme3n1 : 5.79 204.52 12.78 0.00 0.00 539617.88 1651.56 889394.58 00:16:32.835 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:32.835 Verification LBA range: start 0x2000 length 0x2000 00:16:32.835 nvme3n1 : 5.78 193.61 12.10 0.00 0.00 574937.78 12475.53 1071316.20 00:16:32.835 [2024-11-04T16:10:51.557Z] =================================================================================================================== 00:16:32.835 [2024-11-04T16:10:51.557Z] Total : 1991.74 124.48 0.00 0.00 706803.83 1651.56 1435159.44 00:16:34.212 00:16:34.212 real 0m8.196s 00:16:34.212 user 0m14.780s 00:16:34.212 sys 0m0.652s 00:16:34.212 16:10:52 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:34.212 16:10:52 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.212 ************************************ 00:16:34.212 END TEST bdev_verify_big_io 00:16:34.212 ************************************ 00:16:34.212 16:10:52 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:34.212 16:10:52 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:34.212 16:10:52 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:34.212 16:10:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:34.212 ************************************ 00:16:34.212 START TEST bdev_write_zeroes 00:16:34.212 ************************************ 00:16:34.212 16:10:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:34.212 [2024-11-04 16:10:52.873035] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:16:34.212 [2024-11-04 16:10:52.873162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72041 ] 00:16:34.471 [2024-11-04 16:10:53.055011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.471 [2024-11-04 16:10:53.166128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.039 Running I/O for 1 seconds... 
00:16:35.976 73536.00 IOPS, 287.25 MiB/s 00:16:35.976 Latency(us) 00:16:35.976 [2024-11-04T16:10:54.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.976 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:35.976 nvme0n1 : 1.03 11849.13 46.29 0.00 0.00 10791.55 7685.35 38532.01 00:16:35.976 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:35.976 nvme1n1 : 1.03 13138.90 51.32 0.00 0.00 9725.82 4605.94 32846.96 00:16:35.976 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:35.976 nvme2n1 : 1.03 11813.73 46.15 0.00 0.00 10763.84 5842.97 32636.40 00:16:35.976 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:35.976 nvme2n2 : 1.03 11796.67 46.08 0.00 0.00 10764.92 5158.66 32425.84 00:16:35.976 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:35.976 nvme2n3 : 1.03 11779.62 46.01 0.00 0.00 10774.73 5237.62 32215.29 00:16:35.976 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:35.976 nvme3n1 : 1.03 11765.84 45.96 0.00 0.00 10779.82 5237.62 31794.17 00:16:35.976 [2024-11-04T16:10:54.698Z] =================================================================================================================== 00:16:35.976 [2024-11-04T16:10:54.698Z] Total : 72143.89 281.81 0.00 0.00 10584.30 4605.94 38532.01 00:16:37.352 00:16:37.352 real 0m3.020s 00:16:37.352 user 0m2.196s 00:16:37.352 sys 0m0.656s 00:16:37.352 16:10:55 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:37.352 16:10:55 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:37.352 ************************************ 00:16:37.352 END TEST bdev_write_zeroes 00:16:37.352 ************************************ 00:16:37.352 16:10:55 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:37.352 16:10:55 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:37.352 16:10:55 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:37.352 16:10:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:37.352 ************************************ 00:16:37.352 START TEST bdev_json_nonenclosed 00:16:37.352 ************************************ 00:16:37.353 16:10:55 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:37.353 [2024-11-04 16:10:55.960919] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:16:37.353 [2024-11-04 16:10:55.961036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72100 ] 00:16:37.611 [2024-11-04 16:10:56.129423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.611 [2024-11-04 16:10:56.237856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.611 [2024-11-04 16:10:56.237945] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:37.611 [2024-11-04 16:10:56.237967] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:37.611 [2024-11-04 16:10:56.237978] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:37.869 00:16:37.869 real 0m0.619s 00:16:37.869 user 0m0.384s 00:16:37.869 sys 0m0.131s 00:16:37.869 16:10:56 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:37.869 16:10:56 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:37.869 ************************************ 00:16:37.869 END TEST bdev_json_nonenclosed 00:16:37.869 ************************************ 00:16:37.869 16:10:56 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:37.869 16:10:56 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:37.869 16:10:56 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:37.869 16:10:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:37.869 ************************************ 00:16:37.869 START TEST bdev_json_nonarray 00:16:37.869 ************************************ 00:16:37.869 16:10:56 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:38.128 [2024-11-04 16:10:56.656895] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:16:38.128 [2024-11-04 16:10:56.657023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72126 ] 00:16:38.128 [2024-11-04 16:10:56.839945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.386 [2024-11-04 16:10:56.949701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.386 [2024-11-04 16:10:56.950002] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:38.386 [2024-11-04 16:10:56.950033] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:38.386 [2024-11-04 16:10:56.950046] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:38.645 00:16:38.645 real 0m0.639s 00:16:38.645 user 0m0.392s 00:16:38.645 sys 0m0.143s 00:16:38.645 16:10:57 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:38.645 16:10:57 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:38.645 ************************************ 00:16:38.645 END TEST bdev_json_nonarray 00:16:38.645 ************************************ 00:16:38.645 16:10:57 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:16:38.645 16:10:57 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:16:38.645 16:10:57 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:16:38.645 16:10:57 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:38.645 16:10:57 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:16:38.645 16:10:57 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:38.645 16:10:57 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:38.645 16:10:57 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:38.645 16:10:57 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:38.645 16:10:57 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:38.645 16:10:57 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:38.645 16:10:57 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:39.586 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:47.765 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:48.023 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:48.023 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:48.023 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:48.023 00:16:48.023 real 1m8.279s 00:16:48.023 user 1m40.797s 00:16:48.023 sys 0m45.279s 00:16:48.023 16:11:06 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:48.023 ************************************ 00:16:48.023 END TEST blockdev_xnvme 00:16:48.023 ************************************ 00:16:48.023 16:11:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:48.023 16:11:06 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:48.023 16:11:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:48.023 16:11:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:48.023 16:11:06 -- common/autotest_common.sh@10 -- # set +x 00:16:48.023 ************************************ 00:16:48.023 START TEST ublk 00:16:48.023 ************************************ 00:16:48.023 16:11:06 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:48.281 * Looking for test storage... 
00:16:48.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:48.281 16:11:06 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:48.281 16:11:06 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:48.281 16:11:06 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:16:48.281 16:11:06 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:48.281 16:11:06 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:48.281 16:11:06 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:48.281 16:11:06 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:48.281 16:11:06 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.281 16:11:06 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:48.281 16:11:06 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:48.281 16:11:06 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:48.281 16:11:06 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:48.281 16:11:06 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:48.281 16:11:06 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:48.281 16:11:06 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:48.281 16:11:06 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:48.281 16:11:06 ublk -- scripts/common.sh@345 -- # : 1 00:16:48.281 16:11:06 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:48.281 16:11:06 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:48.281 16:11:06 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:48.281 16:11:06 ublk -- scripts/common.sh@353 -- # local d=1 00:16:48.281 16:11:06 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.281 16:11:06 ublk -- scripts/common.sh@355 -- # echo 1 00:16:48.281 16:11:06 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:48.281 16:11:06 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:48.281 16:11:06 ublk -- scripts/common.sh@353 -- # local d=2 00:16:48.281 16:11:06 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.281 16:11:06 ublk -- scripts/common.sh@355 -- # echo 2 00:16:48.281 16:11:06 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:48.281 16:11:06 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:48.281 16:11:06 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:48.281 16:11:06 ublk -- scripts/common.sh@368 -- # return 0 00:16:48.281 16:11:06 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.281 16:11:06 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:48.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.281 --rc genhtml_branch_coverage=1 00:16:48.281 --rc genhtml_function_coverage=1 00:16:48.281 --rc genhtml_legend=1 00:16:48.281 --rc geninfo_all_blocks=1 00:16:48.281 --rc geninfo_unexecuted_blocks=1 00:16:48.281 00:16:48.281 ' 00:16:48.281 16:11:06 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:48.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.281 --rc genhtml_branch_coverage=1 00:16:48.281 --rc genhtml_function_coverage=1 00:16:48.281 --rc genhtml_legend=1 00:16:48.281 --rc geninfo_all_blocks=1 00:16:48.281 --rc geninfo_unexecuted_blocks=1 00:16:48.281 00:16:48.281 ' 00:16:48.281 16:11:06 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:48.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.281 --rc genhtml_branch_coverage=1 00:16:48.281 --rc 
genhtml_function_coverage=1 00:16:48.281 --rc genhtml_legend=1 00:16:48.281 --rc geninfo_all_blocks=1 00:16:48.281 --rc geninfo_unexecuted_blocks=1 00:16:48.281 00:16:48.281 ' 00:16:48.281 16:11:06 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:48.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.281 --rc genhtml_branch_coverage=1 00:16:48.281 --rc genhtml_function_coverage=1 00:16:48.281 --rc genhtml_legend=1 00:16:48.281 --rc geninfo_all_blocks=1 00:16:48.281 --rc geninfo_unexecuted_blocks=1 00:16:48.281 00:16:48.281 ' 00:16:48.281 16:11:06 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:48.281 16:11:06 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:48.281 16:11:06 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:48.281 16:11:06 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:48.281 16:11:06 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:48.281 16:11:06 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:48.282 16:11:06 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:48.282 16:11:06 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:48.282 16:11:06 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:48.282 16:11:06 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:48.282 16:11:06 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:48.282 16:11:06 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:48.282 16:11:06 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:48.282 16:11:06 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:48.282 16:11:06 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:48.282 16:11:06 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:48.282 16:11:06 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:48.282 16:11:06 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:48.282 16:11:06 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:48.282 16:11:06 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:48.282 16:11:06 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:48.282 16:11:06 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:48.282 16:11:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.282 ************************************ 00:16:48.282 START TEST test_save_ublk_config 00:16:48.282 ************************************ 00:16:48.282 16:11:06 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:16:48.282 16:11:06 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:48.282 16:11:06 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:48.282 16:11:06 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=72429 00:16:48.282 16:11:06 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:48.282 16:11:06 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 72429 00:16:48.282 16:11:06 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 72429 ']' 00:16:48.282 16:11:06 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.282 16:11:06 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:48.282 16:11:06 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:48.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.282 16:11:06 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:48.282 16:11:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:48.540 [2024-11-04 16:11:07.067302] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:16:48.540 [2024-11-04 16:11:07.067598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72429 ] 00:16:48.540 [2024-11-04 16:11:07.229262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.798 [2024-11-04 16:11:07.331847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.734 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:49.734 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:16:49.734 16:11:08 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:49.734 16:11:08 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:49.734 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.734 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:49.734 [2024-11-04 16:11:08.123783] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:49.734 [2024-11-04 16:11:08.124847] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:49.734 malloc0 00:16:49.734 [2024-11-04 16:11:08.203914] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:49.734 [2024-11-04 16:11:08.204005] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:49.734 [2024-11-04 16:11:08.204019] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:49.734 [2024-11-04 16:11:08.204027] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:49.734 [2024-11-04 16:11:08.212866] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:49.734 [2024-11-04 16:11:08.212892] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:49.734 [2024-11-04 16:11:08.219784] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:49.734 [2024-11-04 16:11:08.219885] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:49.734 [2024-11-04 16:11:08.236775] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:49.734 0 00:16:49.734 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.734 16:11:08 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:49.734 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.734 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:49.992 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.992 16:11:08 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:49.992 "subsystems": [ 00:16:49.992 { 00:16:49.992 "subsystem": "fsdev", 00:16:49.992 
"config": [ 00:16:49.992 { 00:16:49.992 "method": "fsdev_set_opts", 00:16:49.992 "params": { 00:16:49.992 "fsdev_io_pool_size": 65535, 00:16:49.992 "fsdev_io_cache_size": 256 00:16:49.992 } 00:16:49.992 } 00:16:49.992 ] 00:16:49.992 }, 00:16:49.992 { 00:16:49.992 "subsystem": "keyring", 00:16:49.992 "config": [] 00:16:49.992 }, 00:16:49.992 { 00:16:49.992 "subsystem": "iobuf", 00:16:49.992 "config": [ 00:16:49.992 { 00:16:49.992 "method": "iobuf_set_options", 00:16:49.992 "params": { 00:16:49.992 "small_pool_count": 8192, 00:16:49.992 "large_pool_count": 1024, 00:16:49.992 "small_bufsize": 8192, 00:16:49.992 "large_bufsize": 135168, 00:16:49.992 "enable_numa": false 00:16:49.992 } 00:16:49.992 } 00:16:49.992 ] 00:16:49.992 }, 00:16:49.992 { 00:16:49.992 "subsystem": "sock", 00:16:49.992 "config": [ 00:16:49.992 { 00:16:49.992 "method": "sock_set_default_impl", 00:16:49.992 "params": { 00:16:49.992 "impl_name": "posix" 00:16:49.992 } 00:16:49.992 }, 00:16:49.992 { 00:16:49.992 "method": "sock_impl_set_options", 00:16:49.992 "params": { 00:16:49.992 "impl_name": "ssl", 00:16:49.992 "recv_buf_size": 4096, 00:16:49.992 "send_buf_size": 4096, 00:16:49.992 "enable_recv_pipe": true, 00:16:49.992 "enable_quickack": false, 00:16:49.992 "enable_placement_id": 0, 00:16:49.992 "enable_zerocopy_send_server": true, 00:16:49.992 "enable_zerocopy_send_client": false, 00:16:49.992 "zerocopy_threshold": 0, 00:16:49.992 "tls_version": 0, 00:16:49.992 "enable_ktls": false 00:16:49.992 } 00:16:49.992 }, 00:16:49.992 { 00:16:49.992 "method": "sock_impl_set_options", 00:16:49.992 "params": { 00:16:49.992 "impl_name": "posix", 00:16:49.992 "recv_buf_size": 2097152, 00:16:49.992 "send_buf_size": 2097152, 00:16:49.992 "enable_recv_pipe": true, 00:16:49.992 "enable_quickack": false, 00:16:49.992 "enable_placement_id": 0, 00:16:49.992 "enable_zerocopy_send_server": true, 00:16:49.992 "enable_zerocopy_send_client": false, 00:16:49.992 "zerocopy_threshold": 0, 00:16:49.992 "tls_version": 0, 00:16:49.992 "enable_ktls": false 00:16:49.992 } 00:16:49.992 } 00:16:49.992 ] 00:16:49.992 }, 00:16:49.992 { 00:16:49.992 "subsystem": "vmd", 00:16:49.992 "config": [] 00:16:49.992 }, 00:16:49.992 { 00:16:49.992 "subsystem": "accel", 00:16:49.992 "config": [ 00:16:49.992 { 00:16:49.992 "method": "accel_set_options", 00:16:49.992 "params": { 00:16:49.992 "small_cache_size": 128, 00:16:49.992 "large_cache_size": 16, 00:16:49.992 "task_count": 2048, 00:16:49.992 "sequence_count": 2048, 00:16:49.992 "buf_count": 2048 00:16:49.992 } 00:16:49.992 } 00:16:49.992 ] 00:16:49.992 }, 00:16:49.992 { 00:16:49.992 "subsystem": "bdev", 00:16:49.992 "config": [ 00:16:49.992 { 00:16:49.992 "method": "bdev_set_options", 00:16:49.992 "params": { 00:16:49.992 "bdev_io_pool_size": 65535, 00:16:49.992 "bdev_io_cache_size": 256, 00:16:49.992 "bdev_auto_examine": true, 00:16:49.992 "iobuf_small_cache_size": 128, 00:16:49.992 "iobuf_large_cache_size": 16 00:16:49.992 } 00:16:49.992 }, 00:16:49.992 { 00:16:49.992 "method": "bdev_raid_set_options", 00:16:49.992 "params": { 00:16:49.992 "process_window_size_kb": 1024, 00:16:49.992 "process_max_bandwidth_mb_sec": 0 00:16:49.992 } 00:16:49.992 }, 00:16:49.992 { 00:16:49.992 "method": "bdev_iscsi_set_options", 00:16:49.992 "params": { 00:16:49.992 "timeout_sec": 30 00:16:49.992 } 00:16:49.992 }, 00:16:49.993 { 00:16:49.993 "method": "bdev_nvme_set_options", 00:16:49.993 "params": { 00:16:49.993 "action_on_timeout": "none", 00:16:49.993 "timeout_us": 0, 00:16:49.993 "timeout_admin_us": 0, 00:16:49.993 
"keep_alive_timeout_ms": 10000, 00:16:49.993 "arbitration_burst": 0, 00:16:49.993 "low_priority_weight": 0, 00:16:49.993 "medium_priority_weight": 0, 00:16:49.993 "high_priority_weight": 0, 00:16:49.993 "nvme_adminq_poll_period_us": 10000, 00:16:49.993 "nvme_ioq_poll_period_us": 0, 00:16:49.993 "io_queue_requests": 0, 00:16:49.993 "delay_cmd_submit": true, 00:16:49.993 "transport_retry_count": 4, 00:16:49.993 "bdev_retry_count": 3, 00:16:49.993 "transport_ack_timeout": 0, 00:16:49.993 "ctrlr_loss_timeout_sec": 0, 00:16:49.993 "reconnect_delay_sec": 0, 00:16:49.993 "fast_io_fail_timeout_sec": 0, 00:16:49.993 "disable_auto_failback": false, 00:16:49.993 "generate_uuids": false, 00:16:49.993 "transport_tos": 0, 00:16:49.993 "nvme_error_stat": false, 00:16:49.993 "rdma_srq_size": 0, 00:16:49.993 "io_path_stat": false, 00:16:49.993 "allow_accel_sequence": false, 00:16:49.993 "rdma_max_cq_size": 0, 00:16:49.993 "rdma_cm_event_timeout_ms": 0, 00:16:49.993 "dhchap_digests": [ 00:16:49.993 "sha256", 00:16:49.993 "sha384", 00:16:49.993 "sha512" 00:16:49.993 ], 00:16:49.993 "dhchap_dhgroups": [ 00:16:49.993 "null", 00:16:49.993 "ffdhe2048", 00:16:49.993 "ffdhe3072", 00:16:49.993 "ffdhe4096", 00:16:49.993 "ffdhe6144", 00:16:49.993 "ffdhe8192" 00:16:49.993 ] 00:16:49.993 } 00:16:49.993 }, 00:16:49.993 { 00:16:49.993 "method": "bdev_nvme_set_hotplug", 00:16:49.993 "params": { 00:16:49.993 "period_us": 100000, 00:16:49.993 "enable": false 00:16:49.993 } 00:16:49.993 }, 00:16:49.993 { 00:16:49.993 "method": "bdev_malloc_create", 00:16:49.993 "params": { 00:16:49.993 "name": "malloc0", 00:16:49.993 "num_blocks": 8192, 00:16:49.993 "block_size": 4096, 00:16:49.993 "physical_block_size": 4096, 00:16:49.993 "uuid": "717edddd-7dbf-4e19-80c2-369d7e5f9e74", 00:16:49.993 "optimal_io_boundary": 0, 00:16:49.993 "md_size": 0, 00:16:49.993 "dif_type": 0, 00:16:49.993 "dif_is_head_of_md": false, 00:16:49.993 "dif_pi_format": 0 00:16:49.993 } 00:16:49.993 }, 00:16:49.993 { 00:16:49.993 "method": "bdev_wait_for_examine" 00:16:49.993 } 00:16:49.993 ] 00:16:49.993 }, 00:16:49.993 { 00:16:49.993 "subsystem": "scsi", 00:16:49.993 "config": null 00:16:49.993 }, 00:16:49.993 { 00:16:49.993 "subsystem": "scheduler", 00:16:49.993 "config": [ 00:16:49.993 { 00:16:49.993 "method": "framework_set_scheduler", 00:16:49.993 "params": { 00:16:49.993 "name": "static" 00:16:49.993 } 00:16:49.993 } 00:16:49.993 ] 00:16:49.993 }, 00:16:49.993 { 00:16:49.993 "subsystem": "vhost_scsi", 00:16:49.993 "config": [] 00:16:49.993 }, 00:16:49.993 { 00:16:49.993 "subsystem": "vhost_blk", 00:16:49.993 "config": [] 00:16:49.993 }, 00:16:49.993 { 00:16:49.993 "subsystem": "ublk", 00:16:49.993 "config": [ 00:16:49.993 { 00:16:49.993 "method": "ublk_create_target", 00:16:49.993 "params": { 00:16:49.993 "cpumask": "1" 00:16:49.993 } 00:16:49.993 }, 00:16:49.993 { 00:16:49.993 "method": "ublk_start_disk", 00:16:49.993 "params": { 00:16:49.993 "bdev_name": "malloc0", 00:16:49.993 "ublk_id": 0, 00:16:49.993 "num_queues": 1, 00:16:49.993 "queue_depth": 128 00:16:49.993 } 00:16:49.993 } 00:16:49.993 ] 00:16:49.993 }, 00:16:49.993 { 00:16:49.993 "subsystem": "nbd", 00:16:49.993 "config": [] 00:16:49.993 }, 00:16:49.993 { 00:16:49.993 "subsystem": "nvmf", 00:16:49.993 "config": [ 00:16:49.993 { 00:16:49.993 "method": "nvmf_set_config", 00:16:49.993 "params": { 00:16:49.993 "discovery_filter": "match_any", 00:16:49.993 "admin_cmd_passthru": { 00:16:49.993 "identify_ctrlr": false 00:16:49.993 }, 00:16:49.993 "dhchap_digests": [ 00:16:49.993 "sha256", 00:16:49.993 
"sha384", 00:16:49.993 "sha512" 00:16:49.993 ], 00:16:49.993 "dhchap_dhgroups": [ 00:16:49.993 "null", 00:16:49.993 "ffdhe2048", 00:16:49.993 "ffdhe3072", 00:16:49.993 "ffdhe4096", 00:16:49.993 "ffdhe6144", 00:16:49.993 "ffdhe8192" 00:16:49.993 ] 00:16:49.993 } 00:16:49.993 }, 00:16:49.993 { 00:16:49.993 "method": "nvmf_set_max_subsystems", 00:16:49.993 "params": { 00:16:49.993 "max_subsystems": 1024 00:16:49.993 } 00:16:49.993 }, 00:16:49.993 { 00:16:49.993 "method": "nvmf_set_crdt", 00:16:49.993 "params": { 00:16:49.993 "crdt1": 0, 00:16:49.993 "crdt2": 0, 00:16:49.993 "crdt3": 0 00:16:49.993 } 00:16:49.993 } 00:16:49.993 ] 00:16:49.993 }, 00:16:49.993 { 00:16:49.993 "subsystem": "iscsi", 00:16:49.993 "config": [ 00:16:49.993 { 00:16:49.993 "method": "iscsi_set_options", 00:16:49.993 "params": { 00:16:49.993 "node_base": "iqn.2016-06.io.spdk", 00:16:49.993 "max_sessions": 128, 00:16:49.993 "max_connections_per_session": 2, 00:16:49.993 "max_queue_depth": 64, 00:16:49.993 "default_time2wait": 2, 00:16:49.993 "default_time2retain": 20, 00:16:49.993 "first_burst_length": 8192, 00:16:49.993 "immediate_data": true, 00:16:49.993 "allow_duplicated_isid": false, 00:16:49.993 "error_recovery_level": 0, 00:16:49.993 "nop_timeout": 60, 00:16:49.993 "nop_in_interval": 30, 00:16:49.993 "disable_chap": false, 00:16:49.993 "require_chap": false, 00:16:49.993 "mutual_chap": false, 00:16:49.993 "chap_group": 0, 00:16:49.993 "max_large_datain_per_connection": 64, 00:16:49.993 "max_r2t_per_connection": 4, 00:16:49.993 "pdu_pool_size": 36864, 00:16:49.993 "immediate_data_pool_size": 16384, 00:16:49.993 "data_out_pool_size": 2048 00:16:49.993 } 00:16:49.993 } 00:16:49.993 ] 00:16:49.993 } 00:16:49.993 ] 00:16:49.993 }' 00:16:49.993 16:11:08 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 72429 00:16:49.993 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 72429 ']' 00:16:49.993 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 72429 00:16:49.993 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:16:49.993 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:49.993 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72429 00:16:49.993 killing process with pid 72429 00:16:49.993 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:49.993 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:49.993 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72429' 00:16:49.993 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 72429 00:16:49.993 16:11:08 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 72429 00:16:51.367 [2024-11-04 16:11:10.033866] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:51.367 [2024-11-04 16:11:10.062848] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:51.367 [2024-11-04 16:11:10.062968] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:51.367 [2024-11-04 16:11:10.070786] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:51.367 [2024-11-04 16:11:10.070840] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:16:51.367 [2024-11-04 16:11:10.070855] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:51.367 [2024-11-04 16:11:10.070881] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:51.367 [2024-11-04 16:11:10.071022] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:53.277 16:11:11 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=72495 00:16:53.277 16:11:11 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 72495 00:16:53.277 16:11:11 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 72495 ']' 00:16:53.277 16:11:11 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.277 16:11:11 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:53.277 16:11:11 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:53.277 16:11:11 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.277 16:11:11 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:53.277 "subsystems": [ 00:16:53.277 { 00:16:53.277 "subsystem": "fsdev", 00:16:53.277 "config": [ 00:16:53.277 { 00:16:53.277 "method": "fsdev_set_opts", 00:16:53.277 "params": { 00:16:53.277 "fsdev_io_pool_size": 65535, 00:16:53.277 "fsdev_io_cache_size": 256 00:16:53.277 } 00:16:53.277 } 00:16:53.277 ] 00:16:53.277 }, 00:16:53.277 { 00:16:53.277 "subsystem": "keyring", 00:16:53.277 "config": [] 00:16:53.277 }, 00:16:53.277 { 00:16:53.277 "subsystem": "iobuf", 00:16:53.277 "config": [ 00:16:53.277 { 00:16:53.277 "method": "iobuf_set_options", 00:16:53.277 "params": { 00:16:53.277 "small_pool_count": 8192, 00:16:53.277 "large_pool_count": 1024, 00:16:53.277 "small_bufsize": 8192, 00:16:53.277 "large_bufsize": 135168, 00:16:53.277 "enable_numa": false 00:16:53.277 } 00:16:53.277 } 00:16:53.277 ] 00:16:53.277 }, 00:16:53.277 { 00:16:53.277 "subsystem": "sock", 00:16:53.277 "config": [ 00:16:53.277 { 00:16:53.277 "method": "sock_set_default_impl", 00:16:53.277 "params": { 00:16:53.277 "impl_name": "posix" 00:16:53.277 } 00:16:53.277 }, 00:16:53.277 { 00:16:53.277 "method": "sock_impl_set_options", 00:16:53.277 "params": { 00:16:53.277 "impl_name": "ssl", 00:16:53.277 "recv_buf_size": 4096, 00:16:53.277 "send_buf_size": 4096, 00:16:53.277 "enable_recv_pipe": true, 00:16:53.277 "enable_quickack": false, 00:16:53.277 "enable_placement_id": 0, 00:16:53.277 "enable_zerocopy_send_server": true, 00:16:53.277 "enable_zerocopy_send_client": false, 00:16:53.277 "zerocopy_threshold": 0, 00:16:53.277 "tls_version": 0, 00:16:53.277 "enable_ktls": false 00:16:53.277 } 00:16:53.277 }, 00:16:53.277 { 00:16:53.277 "method": "sock_impl_set_options", 00:16:53.277 "params": { 00:16:53.277 "impl_name": "posix", 00:16:53.277 "recv_buf_size": 2097152, 00:16:53.277 "send_buf_size": 2097152, 00:16:53.277 "enable_recv_pipe": true, 00:16:53.277 "enable_quickack": false, 00:16:53.277 "enable_placement_id": 0, 00:16:53.277 "enable_zerocopy_send_server": true, 00:16:53.277 "enable_zerocopy_send_client": false, 00:16:53.277 "zerocopy_threshold": 0, 00:16:53.277 "tls_version": 0, 00:16:53.277 "enable_ktls": false 00:16:53.277 } 00:16:53.277 } 00:16:53.277 ] 00:16:53.277 }, 00:16:53.277 { 00:16:53.277 "subsystem": "vmd", 00:16:53.277 "config": [] 
00:16:53.277 }, 00:16:53.277 { 00:16:53.277 "subsystem": "accel", 00:16:53.277 "config": [ 00:16:53.277 { 00:16:53.277 "method": "accel_set_options", 00:16:53.277 "params": { 00:16:53.277 "small_cache_size": 128, 00:16:53.277 "large_cache_size": 16, 00:16:53.277 "task_count": 2048, 00:16:53.277 "sequence_count": 2048, 00:16:53.277 "buf_count": 2048 00:16:53.277 } 00:16:53.277 } 00:16:53.277 ] 00:16:53.277 }, 00:16:53.277 { 00:16:53.277 "subsystem": "bdev", 00:16:53.277 "config": [ 00:16:53.277 { 00:16:53.277 "method": "bdev_set_options", 00:16:53.277 "params": { 00:16:53.277 "bdev_io_pool_size": 65535, 00:16:53.277 "bdev_io_cache_size": 256, 00:16:53.277 "bdev_auto_examine": true, 00:16:53.277 "iobuf_small_cache_size": 128, 00:16:53.277 "iobuf_large_cache_size": 16 00:16:53.277 } 00:16:53.277 }, 00:16:53.277 { 00:16:53.277 "method": "bdev_raid_set_options", 00:16:53.277 "params": { 00:16:53.277 "process_window_size_kb": 1024, 00:16:53.277 "process_max_bandwidth_mb_sec": 0 00:16:53.277 } 00:16:53.277 }, 00:16:53.277 { 00:16:53.277 "method": "bdev_iscsi_set_options", 00:16:53.277 "params": { 00:16:53.277 "timeout_sec": 30 00:16:53.277 } 00:16:53.277 }, 00:16:53.277 { 00:16:53.277 "method": "bdev_nvme_set_options", 00:16:53.277 "params": { 00:16:53.277 "action_on_timeout": "none", 00:16:53.277 "timeout_us": 0, 00:16:53.277 "timeout_admin_us": 0, 00:16:53.277 "keep_alive_timeout_ms": 10000, 00:16:53.277 "arbitration_burst": 0, 00:16:53.278 "low_priority_weight": 0, 00:16:53.278 "medium_priority_weight": 0, 00:16:53.278 "high_priority_weight": 0, 00:16:53.278 "nvme_adminq_poll_period_us": 10000, 00:16:53.278 "nvme_ioq_poll_period_us": 0, 00:16:53.278 "io_queue_requests": 0, 00:16:53.278 "delay_cmd_submit": true, 00:16:53.278 "transport_retry_count": 4, 00:16:53.278 "bdev_retry_count": 3, 00:16:53.278 "transport_ack_timeout": 0, 00:16:53.278 "ctrlr_loss_timeout_sec": 0, 00:16:53.278 "reconnect_delay_sec": 0, 00:16:53.278 "fast_io_fail_timeout_sec": 0, 00:16:53.278 "disable_auto_failback": false, 00:16:53.278 "generate_uuids": false, 00:16:53.278 "transport_tos": 0, 00:16:53.278 "nvme_error_stat": false, 00:16:53.278 "rdma_srq_size": 0, 00:16:53.278 "io_path_stat": false, 00:16:53.278 "allow_accel_sequence": false, 00:16:53.278 "rdma_max_cq_size": 0, 00:16:53.278 "rdma_cm_event_timeout_ms": 0, 00:16:53.278 "dhchap_digests": [ 00:16:53.278 "sha256", 00:16:53.278 "sha384", 00:16:53.278 "sha512" 00:16:53.278 ], 00:16:53.278 "dhchap_dhgroups": [ 00:16:53.278 "null", 00:16:53.278 "ffdhe2048", 00:16:53.278 "ffdhe3072", 00:16:53.278 "ffdhe4096", 00:16:53.278 "ffdhe6144", 00:16:53.278 "ffdhe8192" 00:16:53.278 ] 00:16:53.278 } 00:16:53.278 }, 00:16:53.278 { 00:16:53.278 "method": "bdev_nvme_set_hotplug", 00:16:53.278 "params": { 00:16:53.278 "period_us": 100000, 00:16:53.278 "enable": false 00:16:53.278 } 00:16:53.278 }, 00:16:53.278 { 00:16:53.278 "method": "bdev_malloc_create", 00:16:53.278 "params": { 00:16:53.278 "name": "malloc0", 00:16:53.278 "num_blocks": 8192, 00:16:53.278 "block_size": 4096, 00:16:53.278 "physical_block_size": 4096, 00:16:53.278 "uuid": "717edddd-7dbf-4e19-80c2-369d7e5f9e74", 00:16:53.278 "optimal_io_boundary": 0, 00:16:53.278 "md_size": 0, 00:16:53.278 "dif_type": 0, 00:16:53.278 "dif_is_head_of_md": false, 00:16:53.278 "dif_pi_format": 0 00:16:53.278 } 00:16:53.278 }, 00:16:53.278 { 00:16:53.278 "method": "bdev_wait_for_examine" 00:16:53.278 } 00:16:53.278 ] 00:16:53.278 }, 00:16:53.278 { 00:16:53.278 "subsystem": "scsi", 00:16:53.278 "config": null 00:16:53.278 }, 00:16:53.278 
{ 00:16:53.278 "subsystem": "scheduler", 00:16:53.278 "config": [ 00:16:53.278 { 00:16:53.278 "method": "framework_set_scheduler", 00:16:53.278 "params": { 00:16:53.278 "name": "static" 00:16:53.278 } 00:16:53.278 } 00:16:53.278 ] 00:16:53.278 }, 00:16:53.278 { 00:16:53.278 "subsystem": "vhost_scsi", 00:16:53.278 "config": [] 00:16:53.278 }, 00:16:53.278 { 00:16:53.278 "subsystem": "vhost_blk", 00:16:53.278 "config": [] 00:16:53.278 }, 00:16:53.278 { 00:16:53.278 "subsystem": "ublk", 00:16:53.278 "config": [ 00:16:53.278 { 00:16:53.278 "method": "ublk_create_target", 00:16:53.278 "params": { 00:16:53.278 "cpumask": "1" 00:16:53.278 } 00:16:53.278 }, 00:16:53.278 { 00:16:53.278 "method": "ublk_start_disk", 00:16:53.278 "params": { 00:16:53.278 "bdev_name": "malloc0", 00:16:53.278 "ublk_id": 0, 00:16:53.278 "num_queues": 1, 00:16:53.278 "queue_depth": 128 00:16:53.278 } 00:16:53.278 } 00:16:53.278 ] 00:16:53.278 }, 00:16:53.278 { 00:16:53.278 "subsystem": "nbd", 00:16:53.278 "config": [] 00:16:53.278 }, 00:16:53.278 { 00:16:53.278 "subsystem": "nvmf", 00:16:53.278 "config": [ 00:16:53.278 { 00:16:53.278 "method": "nvmf_set_config", 00:16:53.278 "params": { 00:16:53.278 "discovery_filter": "match_any", 00:16:53.278 "admin_cmd_passthru": { 00:16:53.278 "identify_ctrlr": false 00:16:53.278 }, 00:16:53.278 "dhchap_digests": [ 00:16:53.278 "sha256", 00:16:53.278 "sha384", 00:16:53.278 "sha512" 00:16:53.278 ], 00:16:53.278 "dhchap_dhgroups": [ 00:16:53.278 "null", 00:16:53.278 "ffdhe2048", 00:16:53.278 "ffdhe3072", 00:16:53.278 "ffdhe4096", 00:16:53.278 "ffdhe6144", 00:16:53.278 "ffdhe81 16:11:11 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:53.278 16:11:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:53.278 92" 00:16:53.278 ] 00:16:53.278 } 00:16:53.278 }, 00:16:53.278 { 00:16:53.278 "method": "nvmf_set_max_subsystems", 00:16:53.278 "params": { 00:16:53.278 "max_subsystems": 1024 00:16:53.278 } 00:16:53.278 }, 00:16:53.278 { 00:16:53.278 "method": "nvmf_set_crdt", 00:16:53.278 "params": { 00:16:53.278 "crdt1": 0, 00:16:53.278 "crdt2": 0, 00:16:53.278 "crdt3": 0 00:16:53.278 } 00:16:53.278 } 00:16:53.278 ] 00:16:53.278 }, 00:16:53.278 { 00:16:53.278 "subsystem": "iscsi", 00:16:53.278 "config": [ 00:16:53.278 { 00:16:53.278 "method": "iscsi_set_options", 00:16:53.278 "params": { 00:16:53.278 "node_base": "iqn.2016-06.io.spdk", 00:16:53.278 "max_sessions": 128, 00:16:53.278 "max_connections_per_session": 2, 00:16:53.278 "max_queue_depth": 64, 00:16:53.278 "default_time2wait": 2, 00:16:53.278 "default_time2retain": 20, 00:16:53.278 "first_burst_length": 8192, 00:16:53.278 "immediate_data": true, 00:16:53.278 "allow_duplicated_isid": false, 00:16:53.278 "error_recovery_level": 0, 00:16:53.278 "nop_timeout": 60, 00:16:53.278 "nop_in_interval": 30, 00:16:53.278 "disable_chap": false, 00:16:53.278 "require_chap": false, 00:16:53.278 "mutual_chap": false, 00:16:53.278 "chap_group": 0, 00:16:53.278 "max_large_datain_per_connection": 64, 00:16:53.278 "max_r2t_per_connection": 4, 00:16:53.278 "pdu_pool_size": 36864, 00:16:53.278 "immediate_data_pool_size": 16384, 00:16:53.278 "data_out_pool_size": 2048 00:16:53.278 } 00:16:53.278 } 00:16:53.278 ] 00:16:53.278 } 00:16:53.278 ] 00:16:53.278 }' 00:16:53.537 [2024-11-04 16:11:12.017465] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:16:53.537 [2024-11-04 16:11:12.017593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72495 ] 00:16:53.537 [2024-11-04 16:11:12.199823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.796 [2024-11-04 16:11:12.311562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.732 [2024-11-04 16:11:13.356767] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:54.732 [2024-11-04 16:11:13.357944] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:54.732 [2024-11-04 16:11:13.364893] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:54.732 [2024-11-04 16:11:13.364978] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:54.732 [2024-11-04 16:11:13.364996] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:54.732 [2024-11-04 16:11:13.365004] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:54.732 [2024-11-04 16:11:13.373836] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:54.732 [2024-11-04 16:11:13.373861] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:54.732 [2024-11-04 16:11:13.379804] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:54.732 [2024-11-04 16:11:13.379895] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:54.732 [2024-11-04 16:11:13.396771] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:54.732 16:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:54.732 16:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:16:54.732 16:11:13 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:54.991 16:11:13 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:54.991 16:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.991 16:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:54.991 16:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.991 16:11:13 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:54.991 16:11:13 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:54.991 16:11:13 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 72495 00:16:54.991 16:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 72495 ']' 00:16:54.991 16:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 72495 00:16:54.991 16:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:16:54.991 16:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:54.991 16:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72495 00:16:54.991 16:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:54.991 16:11:13 ublk.test_save_ublk_config -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:54.991 16:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72495' 00:16:54.991 killing process with pid 72495 00:16:54.991 16:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 72495 00:16:54.991 16:11:13 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 72495 00:16:56.379 [2024-11-04 16:11:15.091551] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:56.637 [2024-11-04 16:11:15.128789] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:56.637 [2024-11-04 16:11:15.128936] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:56.637 [2024-11-04 16:11:15.137783] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:56.637 [2024-11-04 16:11:15.137830] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:56.637 [2024-11-04 16:11:15.137839] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:56.637 [2024-11-04 16:11:15.137864] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:56.637 [2024-11-04 16:11:15.138003] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:58.539 16:11:16 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:58.539 00:16:58.539 real 0m10.004s 00:16:58.539 user 0m7.633s 00:16:58.539 sys 0m3.056s 00:16:58.539 16:11:16 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:58.539 16:11:16 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:58.539 ************************************ 00:16:58.539 END TEST test_save_ublk_config 00:16:58.540 ************************************ 00:16:58.540 16:11:17 ublk -- ublk/ublk.sh@139 -- # spdk_pid=72586 00:16:58.540 16:11:17 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:58.540 16:11:17 ublk -- ublk/ublk.sh@141 -- # waitforlisten 72586 00:16:58.540 16:11:17 ublk -- common/autotest_common.sh@833 -- # '[' -z 72586 ']' 00:16:58.540 16:11:17 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:58.540 16:11:17 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.540 16:11:17 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:58.540 16:11:17 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.540 16:11:17 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:58.540 16:11:17 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:58.540 [2024-11-04 16:11:17.144682] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
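At this point the harness launches a fresh target for the create tests: spdk_tgt runs on two cores (-m 0x3) with ublk debug logging (-L ublk), and waitforlisten blocks until the RPC socket answers before any ublk RPCs are sent. A rough equivalent of that startup handshake, assuming spdk_get_version as the liveness probe (waitforlisten's internals are not reproduced in this log):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
  tgt_pid=$!

  # poll the default RPC socket until the target is ready to accept RPCs
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until $rpc -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "spdk_tgt ($tgt_pid) is listening on /var/tmp/spdk.sock"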
00:16:58.540 [2024-11-04 16:11:17.144832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72586 ] 00:16:58.798 [2024-11-04 16:11:17.325394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:58.798 [2024-11-04 16:11:17.434178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.798 [2024-11-04 16:11:17.434215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.735 16:11:18 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:59.735 16:11:18 ublk -- common/autotest_common.sh@866 -- # return 0 00:16:59.735 16:11:18 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:59.735 16:11:18 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:59.735 16:11:18 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:59.735 16:11:18 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.735 ************************************ 00:16:59.735 START TEST test_create_ublk 00:16:59.735 ************************************ 00:16:59.735 16:11:18 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:16:59.735 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:59.735 16:11:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.735 16:11:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.735 [2024-11-04 16:11:18.316776] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:59.735 [2024-11-04 16:11:18.319126] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:59.735 16:11:18 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.735 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:59.735 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:59.735 16:11:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.735 16:11:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.994 16:11:18 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.994 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:59.994 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:59.994 16:11:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.994 16:11:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.994 [2024-11-04 16:11:18.602923] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:59.994 [2024-11-04 16:11:18.603363] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:59.994 [2024-11-04 16:11:18.603384] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:59.994 [2024-11-04 16:11:18.603393] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:59.994 [2024-11-04 16:11:18.610796] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:59.994 [2024-11-04 16:11:18.610819] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:59.994 
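Stripped of the xtrace noise, the setup that drives the UBLK_CMD_ADD_DEV / SET_PARAMS / START_DEV sequence above (and completing just below) is three RPCs. A sketch using the same parameters as test_create_ublk, a 128 MiB malloc bdev exposed with 4 queues of depth 512:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc ublk_create_target                      # start the ublk target inside spdk_tgt
  $rpc bdev_malloc_create 128 4096             # prints the new bdev name, Malloc0 here
  $rpc ublk_start_disk Malloc0 0 -q 4 -d 512   # expose Malloc0 to the kernel as /dev/ublkb0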
[2024-11-04 16:11:18.618782] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:59.994 [2024-11-04 16:11:18.624842] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:59.994 [2024-11-04 16:11:18.644788] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:59.994 16:11:18 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.994 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:59.994 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:59.994 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:59.994 16:11:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.994 16:11:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.994 16:11:18 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.994 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:59.994 { 00:16:59.994 "ublk_device": "/dev/ublkb0", 00:16:59.994 "id": 0, 00:16:59.994 "queue_depth": 512, 00:16:59.994 "num_queues": 4, 00:16:59.994 "bdev_name": "Malloc0" 00:16:59.994 } 00:16:59.994 ]' 00:16:59.994 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:59.994 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:59.994 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:17:00.253 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:17:00.253 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:17:00.253 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:17:00.253 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:17:00.253 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:17:00.253 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:17:00.253 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:00.253 16:11:18 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:17:00.253 16:11:18 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:17:00.253 16:11:18 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:17:00.253 16:11:18 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:17:00.253 16:11:18 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:17:00.253 16:11:18 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:17:00.253 16:11:18 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:17:00.253 16:11:18 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:17:00.253 16:11:18 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:17:00.253 16:11:18 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:17:00.253 16:11:18 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
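run_fio_test expands to a single fio invocation against the new ublk block device: a 10-second, 128 MiB time-based write job with a 0xcc pattern and deferred verification. Because the run is time-based, the verify read phase is never scheduled, which is exactly what fio reports below. The command, reproduced here for readability:

  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0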
00:17:00.253 16:11:18 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:17:00.512 fio: verification read phase will never start because write phase uses all of runtime 00:17:00.512 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:17:00.512 fio-3.35 00:17:00.512 Starting 1 process 00:17:10.490 00:17:10.490 fio_test: (groupid=0, jobs=1): err= 0: pid=72638: Mon Nov 4 16:11:29 2024 00:17:10.490 write: IOPS=16.7k, BW=65.1MiB/s (68.2MB/s)(651MiB/10001msec); 0 zone resets 00:17:10.490 clat (usec): min=37, max=3970, avg=59.20, stdev=99.78 00:17:10.490 lat (usec): min=37, max=3971, avg=59.66, stdev=99.79 00:17:10.490 clat percentiles (usec): 00:17:10.490 | 1.00th=[ 40], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 53], 00:17:10.490 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 55], 60.00th=[ 56], 00:17:10.490 | 70.00th=[ 57], 80.00th=[ 58], 90.00th=[ 61], 95.00th=[ 64], 00:17:10.490 | 99.00th=[ 73], 99.50th=[ 79], 99.90th=[ 2180], 99.95th=[ 2868], 00:17:10.490 | 99.99th=[ 3589] 00:17:10.490 bw ( KiB/s): min=65144, max=75840, per=100.00%, avg=66750.11, stdev=2253.26, samples=19 00:17:10.490 iops : min=16286, max=18960, avg=16687.53, stdev=563.31, samples=19 00:17:10.490 lat (usec) : 50=4.16%, 100=95.61%, 250=0.03%, 500=0.01%, 750=0.01% 00:17:10.490 lat (usec) : 1000=0.02% 00:17:10.490 lat (msec) : 2=0.05%, 4=0.11% 00:17:10.490 cpu : usr=3.67%, sys=10.52%, ctx=166582, majf=0, minf=795 00:17:10.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.490 issued rwts: total=0,166579,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.490 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.490 00:17:10.490 Run status group 0 (all jobs): 00:17:10.490 WRITE: bw=65.1MiB/s (68.2MB/s), 65.1MiB/s-65.1MiB/s (68.2MB/s-68.2MB/s), io=651MiB (682MB), run=10001-10001msec 00:17:10.490 00:17:10.490 Disk stats (read/write): 00:17:10.490 ublkb0: ios=0/164870, merge=0/0, ticks=0/8541, in_queue=8542, util=99.13% 00:17:10.490 16:11:29 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:10.490 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.490 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.490 [2024-11-04 16:11:29.133588] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:10.490 [2024-11-04 16:11:29.163197] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:10.490 [2024-11-04 16:11:29.164070] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:10.490 [2024-11-04 16:11:29.172783] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:10.490 [2024-11-04 16:11:29.173049] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:10.490 [2024-11-04 16:11:29.173064] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:10.490 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.490 16:11:29 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
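The NOT wrapper inverts the exit status: the successful ublk_stop_disk above has already torn the device down (STOP_DEV followed by DEL_DEV), so a second stop for the same id is expected to fail, and the test passes only if it does. The failure mode as a plain two-step sketch:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc ublk_stop_disk 0   # succeeds: UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV
  $rpc ublk_stop_disk 0   # fails with -19 "No such device", as shown in the
                          # JSON-RPC error response below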
00:17:10.490 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:17:10.490 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:10.490 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:10.490 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:10.490 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:10.490 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:10.490 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:17:10.490 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.490 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.490 [2024-11-04 16:11:29.196849] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:10.490 request: 00:17:10.490 { 00:17:10.490 "ublk_id": 0, 00:17:10.490 "method": "ublk_stop_disk", 00:17:10.490 "req_id": 1 00:17:10.490 } 00:17:10.490 Got JSON-RPC error response 00:17:10.490 response: 00:17:10.490 { 00:17:10.490 "code": -19, 00:17:10.490 "message": "No such device" 00:17:10.490 } 00:17:10.490 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:10.490 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:17:10.490 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:10.490 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:10.490 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:10.749 16:11:29 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:10.749 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.749 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.749 [2024-11-04 16:11:29.220879] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:10.749 [2024-11-04 16:11:29.228765] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:10.749 [2024-11-04 16:11:29.228810] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:10.749 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.749 16:11:29 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:10.749 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.749 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.317 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.318 16:11:29 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:11.318 16:11:29 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:11.318 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.318 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.318 16:11:29 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.318 16:11:29 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:11.318 16:11:29 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:11.318 16:11:30 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:11.318 16:11:30 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:11.318 16:11:30 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.318 16:11:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.318 16:11:30 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.318 16:11:30 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:11.318 16:11:30 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:11.576 ************************************ 00:17:11.576 END TEST test_create_ublk 00:17:11.576 ************************************ 00:17:11.576 16:11:30 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:11.576 00:17:11.576 real 0m11.755s 00:17:11.576 user 0m0.735s 00:17:11.576 sys 0m1.174s 00:17:11.576 16:11:30 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:11.576 16:11:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.576 16:11:30 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:11.576 16:11:30 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:11.576 16:11:30 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:11.576 16:11:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.576 ************************************ 00:17:11.576 START TEST test_create_multi_ublk 00:17:11.576 ************************************ 00:17:11.577 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:17:11.577 16:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:11.577 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.577 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.577 [2024-11-04 16:11:30.156762] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:11.577 [2024-11-04 16:11:30.159397] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:11.577 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.577 16:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:11.577 16:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:11.577 16:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:11.577 16:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:11.577 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.577 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.835 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.835 16:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:11.835 16:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:11.835 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.835 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.835 [2024-11-04 16:11:30.434921] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
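test_create_multi_ublk repeats the single-disk flow four times: for ids 0 through 3 it creates a named 128 MiB malloc bdev and exposes it through ublk with 4 queues of depth 512, producing /dev/ublkb0 through /dev/ublkb3. Collapsed into a loop with the same parameters the trace shows:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc ublk_create_target
  for i in 0 1 2 3; do
      $rpc bdev_malloc_create -b Malloc$i 128 4096
      $rpc ublk_start_disk Malloc$i $i -q 4 -d 512   # appears as /dev/ublkb$i
  done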
00:17:11.835 [2024-11-04 16:11:30.435358] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:11.835 [2024-11-04 16:11:30.435375] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:11.835 [2024-11-04 16:11:30.435388] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:11.835 [2024-11-04 16:11:30.444046] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:11.835 [2024-11-04 16:11:30.444070] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:11.835 [2024-11-04 16:11:30.450786] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:11.835 [2024-11-04 16:11:30.451365] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:11.835 [2024-11-04 16:11:30.460847] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:11.835 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.835 16:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:11.835 16:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:11.835 16:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:11.835 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.835 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.094 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.094 16:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:12.094 16:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:12.094 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.094 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.094 [2024-11-04 16:11:30.747903] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:12.094 [2024-11-04 16:11:30.748337] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:12.094 [2024-11-04 16:11:30.748357] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:12.094 [2024-11-04 16:11:30.748366] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:12.094 [2024-11-04 16:11:30.759775] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:12.094 [2024-11-04 16:11:30.759800] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:12.094 [2024-11-04 16:11:30.770782] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:12.094 [2024-11-04 16:11:30.771349] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:12.094 [2024-11-04 16:11:30.779816] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:12.094 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.094 16:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:12.094 16:11:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:12.094 16:11:30 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:12.094 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.094 16:11:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.353 16:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.353 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:12.353 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:12.353 16:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.353 16:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.612 [2024-11-04 16:11:31.078899] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:12.612 [2024-11-04 16:11:31.079332] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:12.612 [2024-11-04 16:11:31.079349] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:12.612 [2024-11-04 16:11:31.079360] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:12.612 [2024-11-04 16:11:31.086798] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:12.612 [2024-11-04 16:11:31.086830] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:12.612 [2024-11-04 16:11:31.094770] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:12.612 [2024-11-04 16:11:31.095374] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:12.612 [2024-11-04 16:11:31.100196] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:12.612 16:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.612 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:12.612 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:12.612 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:12.612 16:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.612 16:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.871 [2024-11-04 16:11:31.402943] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:12.871 [2024-11-04 16:11:31.403374] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:12.871 [2024-11-04 16:11:31.403393] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:12.871 [2024-11-04 16:11:31.403401] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:12.871 [2024-11-04 
16:11:31.407271] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:12.871 [2024-11-04 16:11:31.407295] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:12.871 [2024-11-04 16:11:31.417789] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:12.871 [2024-11-04 16:11:31.418348] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:12.871 [2024-11-04 16:11:31.434794] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:12.871 { 00:17:12.871 "ublk_device": "/dev/ublkb0", 00:17:12.871 "id": 0, 00:17:12.871 "queue_depth": 512, 00:17:12.871 "num_queues": 4, 00:17:12.871 "bdev_name": "Malloc0" 00:17:12.871 }, 00:17:12.871 { 00:17:12.871 "ublk_device": "/dev/ublkb1", 00:17:12.871 "id": 1, 00:17:12.871 "queue_depth": 512, 00:17:12.871 "num_queues": 4, 00:17:12.871 "bdev_name": "Malloc1" 00:17:12.871 }, 00:17:12.871 { 00:17:12.871 "ublk_device": "/dev/ublkb2", 00:17:12.871 "id": 2, 00:17:12.871 "queue_depth": 512, 00:17:12.871 "num_queues": 4, 00:17:12.871 "bdev_name": "Malloc2" 00:17:12.871 }, 00:17:12.871 { 00:17:12.871 "ublk_device": "/dev/ublkb3", 00:17:12.871 "id": 3, 00:17:12.871 "queue_depth": 512, 00:17:12.871 "num_queues": 4, 00:17:12.871 "bdev_name": "Malloc3" 00:17:12.871 } 00:17:12.871 ]' 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:12.871 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:13.129 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:13.129 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:13.129 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:13.129 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:13.129 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:13.129 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.129 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:13.129 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
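The array returned by ublk_get_disks is then checked field by field with jq: for every index the test compares ublk_device, id, queue_depth, num_queues and bdev_name against the values it passed in. A compact version of that verification loop, assuming jq is available as in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  disks=$($rpc ublk_get_disks)
  for i in 0 1 2 3; do
      [[ $(echo "$disks" | jq -r ".[$i].ublk_device") == "/dev/ublkb$i" ]]
      [[ $(echo "$disks" | jq -r ".[$i].queue_depth") == 512 ]]
      [[ $(echo "$disks" | jq -r ".[$i].num_queues") == 4 ]]
      [[ $(echo "$disks" | jq -r ".[$i].bdev_name") == "Malloc$i" ]]
  done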
00:17:13.129 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:13.129 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:13.129 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:13.129 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:13.129 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:13.129 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:13.129 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:13.388 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:13.388 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.388 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:13.388 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:13.388 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:13.388 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:13.388 16:11:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:13.388 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:13.388 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:13.388 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:13.388 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:13.388 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:13.388 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.388 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:13.647 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:13.647 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:13.647 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:13.647 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:13.647 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:13.647 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:13.647 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:13.647 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:13.647 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:13.647 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:13.647 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:13.647 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.647 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:13.647 16:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.647 16:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.647 [2024-11-04 16:11:32.326899] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:13.907 [2024-11-04 16:11:32.368202] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:13.907 [2024-11-04 16:11:32.369209] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:13.907 [2024-11-04 16:11:32.379810] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:13.907 [2024-11-04 16:11:32.380087] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:13.907 [2024-11-04 16:11:32.380103] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:13.907 16:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.907 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.907 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:13.907 16:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.907 16:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.907 [2024-11-04 16:11:32.395865] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:13.907 [2024-11-04 16:11:32.428147] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:13.907 [2024-11-04 16:11:32.429201] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:13.907 [2024-11-04 16:11:32.434785] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:13.907 [2024-11-04 16:11:32.435056] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:13.907 [2024-11-04 16:11:32.435072] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:13.907 16:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.907 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.907 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:13.907 16:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.907 16:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.907 [2024-11-04 16:11:32.450875] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:13.907 [2024-11-04 16:11:32.487204] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:13.907 [2024-11-04 16:11:32.488134] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:13.907 [2024-11-04 16:11:32.498792] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:13.907 [2024-11-04 16:11:32.499057] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:13.907 [2024-11-04 16:11:32.499075] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:13.907 16:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.907 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.907 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:13.907 16:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.907 16:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
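Teardown mirrors setup: each disk is stopped in turn (UBLK_CMD_STOP_DEV then UBLK_CMD_DEL_DEV per device), the ublk target is destroyed, and the malloc bdevs are deleted so the leftover-device check at the end finds nothing. As a sketch, using the long RPC timeout the script itself passes to ublk_destroy_target:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  for i in 0 1 2 3; do
      $rpc ublk_stop_disk $i
  done
  $rpc -t 120 ublk_destroy_target
  for i in 0 1 2 3; do
      $rpc bdev_malloc_delete Malloc$i
  done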
00:17:13.907 [2024-11-04 16:11:32.514865] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:13.907 [2024-11-04 16:11:32.558801] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:13.907 [2024-11-04 16:11:32.559512] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:13.907 [2024-11-04 16:11:32.572788] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:13.907 [2024-11-04 16:11:32.573063] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:13.907 [2024-11-04 16:11:32.573077] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:13.907 16:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.907 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:14.166 [2024-11-04 16:11:32.760855] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:14.166 [2024-11-04 16:11:32.767766] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:14.166 [2024-11-04 16:11:32.767809] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:14.166 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:14.166 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:14.166 16:11:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:14.166 16:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.166 16:11:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.102 16:11:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.102 16:11:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:15.102 16:11:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:15.102 16:11:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.102 16:11:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.360 16:11:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.360 16:11:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:15.360 16:11:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:15.360 16:11:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.360 16:11:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.617 16:11:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.617 16:11:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:15.617 16:11:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:15.617 16:11:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.617 16:11:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:16.185 ************************************ 00:17:16.185 END TEST test_create_multi_ublk 00:17:16.185 ************************************ 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:16.185 00:17:16.185 real 0m4.615s 00:17:16.185 user 0m1.019s 00:17:16.185 sys 0m0.216s 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:16.185 16:11:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:16.185 16:11:34 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:16.185 16:11:34 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:16.185 16:11:34 ublk -- ublk/ublk.sh@130 -- # killprocess 72586 00:17:16.185 16:11:34 ublk -- common/autotest_common.sh@952 -- # '[' -z 72586 ']' 00:17:16.185 16:11:34 ublk -- common/autotest_common.sh@956 -- # kill -0 72586 00:17:16.185 16:11:34 ublk -- common/autotest_common.sh@957 -- # uname 00:17:16.185 16:11:34 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:16.185 16:11:34 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72586 00:17:16.185 killing process with pid 72586 00:17:16.185 16:11:34 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:16.185 16:11:34 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:16.185 16:11:34 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72586' 00:17:16.185 16:11:34 ublk -- common/autotest_common.sh@971 -- # kill 72586 00:17:16.185 16:11:34 ublk -- common/autotest_common.sh@976 -- # wait 72586 00:17:17.563 [2024-11-04 16:11:36.003257] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:17.563 [2024-11-04 16:11:36.003310] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:18.937 00:17:18.937 real 0m30.506s 00:17:18.937 user 0m43.894s 00:17:18.937 sys 0m10.257s 00:17:18.937 16:11:37 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:18.937 ************************************ 00:17:18.937 END TEST ublk 00:17:18.937 ************************************ 00:17:18.937 16:11:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:18.937 16:11:37 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:18.937 16:11:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:17:18.937 16:11:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:18.937 16:11:37 -- common/autotest_common.sh@10 -- # set +x 00:17:18.937 ************************************ 00:17:18.937 START TEST ublk_recovery 00:17:18.937 ************************************ 00:17:18.937 16:11:37 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:18.937 * Looking for test storage... 00:17:18.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:18.937 16:11:37 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:18.937 16:11:37 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:17:18.937 16:11:37 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:18.937 16:11:37 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:18.937 16:11:37 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:18.938 16:11:37 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:18.938 16:11:37 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:18.938 16:11:37 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.938 16:11:37 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:18.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.938 --rc genhtml_branch_coverage=1 00:17:18.938 --rc genhtml_function_coverage=1 00:17:18.938 --rc genhtml_legend=1 00:17:18.938 --rc geninfo_all_blocks=1 00:17:18.938 --rc geninfo_unexecuted_blocks=1 00:17:18.938 00:17:18.938 ' 00:17:18.938 16:11:37 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:18.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.938 --rc genhtml_branch_coverage=1 00:17:18.938 --rc genhtml_function_coverage=1 00:17:18.938 --rc genhtml_legend=1 00:17:18.938 --rc geninfo_all_blocks=1 00:17:18.938 --rc geninfo_unexecuted_blocks=1 00:17:18.938 00:17:18.938 ' 00:17:18.938 16:11:37 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:18.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.938 --rc genhtml_branch_coverage=1 00:17:18.938 --rc genhtml_function_coverage=1 00:17:18.938 --rc genhtml_legend=1 00:17:18.938 --rc geninfo_all_blocks=1 00:17:18.938 --rc geninfo_unexecuted_blocks=1 00:17:18.938 00:17:18.938 ' 00:17:18.938 16:11:37 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:18.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.938 --rc genhtml_branch_coverage=1 00:17:18.938 --rc genhtml_function_coverage=1 00:17:18.938 --rc genhtml_legend=1 00:17:18.938 --rc geninfo_all_blocks=1 00:17:18.938 --rc geninfo_unexecuted_blocks=1 00:17:18.938 00:17:18.938 ' 00:17:18.938 16:11:37 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:18.938 16:11:37 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:18.938 16:11:37 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:18.938 16:11:37 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:18.938 16:11:37 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:18.938 16:11:37 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:18.938 16:11:37 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:18.938 16:11:37 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:18.938 16:11:37 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:18.938 16:11:37 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:18.938 16:11:37 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73027 00:17:18.938 16:11:37 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:18.938 16:11:37 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:18.938 16:11:37 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73027 00:17:18.938 16:11:37 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 73027 ']' 00:17:18.938 16:11:37 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.938 16:11:37 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:18.938 16:11:37 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.938 16:11:37 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:18.938 16:11:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.938 [2024-11-04 16:11:37.641014] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:17:18.938 [2024-11-04 16:11:37.641345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73027 ] 00:17:19.195 [2024-11-04 16:11:37.820945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:19.452 [2024-11-04 16:11:37.933852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.452 [2024-11-04 16:11:37.933888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.385 16:11:38 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:20.385 16:11:38 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:17:20.385 16:11:38 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:20.385 16:11:38 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.385 16:11:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.385 [2024-11-04 16:11:38.808793] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:20.385 [2024-11-04 16:11:38.811634] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:20.385 16:11:38 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.385 16:11:38 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:20.385 16:11:38 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.385 16:11:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.385 malloc0 00:17:20.385 16:11:38 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.385 16:11:38 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:20.385 16:11:38 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.385 16:11:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.386 [2024-11-04 16:11:38.958933] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:20.386 [2024-11-04 16:11:38.959048] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:20.386 [2024-11-04 16:11:38.959063] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:20.386 [2024-11-04 16:11:38.959074] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:20.386 [2024-11-04 16:11:38.967910] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:20.386 [2024-11-04 16:11:38.967937] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:20.386 [2024-11-04 16:11:38.974793] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:20.386 [2024-11-04 16:11:38.974936] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:20.386 [2024-11-04 16:11:38.985792] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:20.386 1 00:17:20.386 16:11:38 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.386 16:11:38 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:21.319 16:11:39 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73062 00:17:21.319 16:11:39 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:21.319 16:11:39 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:21.576 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:21.576 fio-3.35 00:17:21.576 Starting 1 process 00:17:26.844 16:11:45 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73027 00:17:26.844 16:11:45 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:32.121 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73027 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:32.121 16:11:50 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=73176 00:17:32.121 16:11:50 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:32.121 16:11:50 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:32.121 16:11:50 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 73176 00:17:32.121 16:11:50 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 73176 ']' 00:17:32.121 16:11:50 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.121 16:11:50 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:32.121 16:11:50 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.121 16:11:50 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:32.121 16:11:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.121 [2024-11-04 16:11:50.117472] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
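(Editor's aside: condensed from the trace above and below, the scenario ublk_recovery.sh exercises is roughly the following RPC sequence; the device id, queue settings and fio job are exactly as logged, but this summary is not itself part of the script.)

    rpc_cmd ublk_create_target
    rpc_cmd bdev_malloc_create -b malloc0 64 4096     # 64 MiB malloc bdev, 4096-byte blocks
    rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128     # ublk id 1 -> /dev/ublkb1, 2 queues, qd 128
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    kill -9 $spdk_pid                                 # crash spdk_tgt while fio keeps running (pid 73027 here)
    # a fresh spdk_tgt is then started and the disk is re-attached without restarting fio:
    rpc_cmd ublk_create_target
    rpc_cmd bdev_malloc_create -b malloc0 64 4096
    rpc_cmd ublk_recover_disk malloc0 1               # drives UBLK_CMD_START/END_USER_RECOVERY

The trace that follows shows exactly this: the restarted target (pid 73176) retries UBLK_CMD_GET_DEV_INFO, issues the user-recovery control commands, and fio finishes its 60-second run against /dev/ublkb1 with err=0.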
00:17:32.121 [2024-11-04 16:11:50.117596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73176 ] 00:17:32.121 [2024-11-04 16:11:50.301014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:32.121 [2024-11-04 16:11:50.413993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.121 [2024-11-04 16:11:50.414024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.688 16:11:51 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:32.688 16:11:51 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:17:32.688 16:11:51 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:32.688 16:11:51 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.688 16:11:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.688 [2024-11-04 16:11:51.263769] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:32.688 [2024-11-04 16:11:51.266177] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:32.688 16:11:51 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.688 16:11:51 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:32.688 16:11:51 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.688 16:11:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.688 malloc0 00:17:32.688 16:11:51 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.688 16:11:51 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:32.688 16:11:51 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.688 16:11:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.688 [2024-11-04 16:11:51.399115] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:32.688 [2024-11-04 16:11:51.399161] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:32.688 [2024-11-04 16:11:51.399173] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:32.688 [2024-11-04 16:11:51.406806] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:32.688 [2024-11-04 16:11:51.406837] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:32.688 1 00:17:32.688 16:11:51 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.688 16:11:51 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73062 00:17:34.064 [2024-11-04 16:11:52.405791] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:34.064 [2024-11-04 16:11:52.413797] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:34.064 [2024-11-04 16:11:52.413822] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:35.000 [2024-11-04 16:11:53.412244] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:35.000 [2024-11-04 16:11:53.415774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:35.000 [2024-11-04 16:11:53.415794] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:17:35.936 [2024-11-04 16:11:54.414204] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:35.936 [2024-11-04 16:11:54.419779] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:35.936 [2024-11-04 16:11:54.419795] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:35.936 [2024-11-04 16:11:54.419808] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:35.936 [2024-11-04 16:11:54.419902] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:57.870 [2024-11-04 16:12:15.422775] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:57.870 [2024-11-04 16:12:15.429223] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:57.870 [2024-11-04 16:12:15.434977] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:57.870 [2024-11-04 16:12:15.435003] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:24.406 00:18:24.406 fio_test: (groupid=0, jobs=1): err= 0: pid=73065: Mon Nov 4 16:12:40 2024 00:18:24.406 read: IOPS=12.3k, BW=48.0MiB/s (50.3MB/s)(2879MiB/60002msec) 00:18:24.406 slat (usec): min=2, max=197, avg= 7.20, stdev= 2.26 00:18:24.406 clat (usec): min=907, max=30444k, avg=5206.05, stdev=285846.01 00:18:24.406 lat (usec): min=913, max=30444k, avg=5213.25, stdev=285846.00 00:18:24.406 clat percentiles (usec): 00:18:24.406 | 1.00th=[ 1909], 5.00th=[ 2114], 10.00th=[ 2180], 20.00th=[ 2245], 00:18:24.406 | 30.00th=[ 2278], 40.00th=[ 2311], 50.00th=[ 2343], 60.00th=[ 2376], 00:18:24.406 | 70.00th=[ 2409], 80.00th=[ 2507], 90.00th=[ 3032], 95.00th=[ 3818], 00:18:24.406 | 99.00th=[ 5342], 99.50th=[ 5735], 99.90th=[ 7177], 99.95th=[ 7898], 00:18:24.406 | 99.99th=[11731] 00:18:24.407 bw ( KiB/s): min=35728, max=105744, per=100.00%, avg=98420.97, stdev=13085.92, samples=59 00:18:24.407 iops : min= 8932, max=26436, avg=24605.19, stdev=3271.49, samples=59 00:18:24.407 write: IOPS=12.3k, BW=47.9MiB/s (50.2MB/s)(2875MiB/60002msec); 0 zone resets 00:18:24.407 slat (usec): min=2, max=354, avg= 7.23, stdev= 2.35 00:18:24.407 clat (usec): min=862, max=30444k, avg=5206.51, stdev=281639.02 00:18:24.407 lat (usec): min=868, max=30444k, avg=5213.74, stdev=281639.01 00:18:24.407 clat percentiles (usec): 00:18:24.407 | 1.00th=[ 1926], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2343], 00:18:24.407 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:18:24.407 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 3130], 95.00th=[ 3785], 00:18:24.407 | 99.00th=[ 5342], 99.50th=[ 5866], 99.90th=[ 7177], 99.95th=[ 7898], 00:18:24.407 | 99.99th=[11731] 00:18:24.407 bw ( KiB/s): min=36960, max=105456, per=100.00%, avg=98299.15, stdev=12805.32, samples=59 00:18:24.407 iops : min= 9240, max=26364, avg=24574.75, stdev=3201.31, samples=59 00:18:24.407 lat (usec) : 1000=0.01% 00:18:24.407 lat (msec) : 2=1.95%, 4=93.91%, 10=4.12%, 20=0.01%, >=2000=0.01% 00:18:24.407 cpu : usr=6.67%, sys=17.91%, ctx=63436, majf=0, minf=13 00:18:24.407 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:24.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:24.407 issued rwts: 
total=737052,735876,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.407 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:24.407 00:18:24.407 Run status group 0 (all jobs): 00:18:24.407 READ: bw=48.0MiB/s (50.3MB/s), 48.0MiB/s-48.0MiB/s (50.3MB/s-50.3MB/s), io=2879MiB (3019MB), run=60002-60002msec 00:18:24.407 WRITE: bw=47.9MiB/s (50.2MB/s), 47.9MiB/s-47.9MiB/s (50.2MB/s-50.2MB/s), io=2875MiB (3014MB), run=60002-60002msec 00:18:24.407 00:18:24.407 Disk stats (read/write): 00:18:24.407 ublkb1: ios=734078/733048, merge=0/0, ticks=3766941/3688657, in_queue=7455598, util=99.96% 00:18:24.407 16:12:40 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:24.407 16:12:40 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.407 16:12:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:24.407 [2024-11-04 16:12:40.281396] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:24.407 [2024-11-04 16:12:40.329800] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:24.407 [2024-11-04 16:12:40.330028] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:24.407 [2024-11-04 16:12:40.337791] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:24.407 [2024-11-04 16:12:40.337934] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:24.407 [2024-11-04 16:12:40.337946] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:24.407 16:12:40 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.407 16:12:40 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:24.407 16:12:40 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.407 16:12:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:24.407 [2024-11-04 16:12:40.353883] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:24.407 [2024-11-04 16:12:40.361774] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:24.407 [2024-11-04 16:12:40.361848] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:24.407 16:12:40 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.407 16:12:40 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:24.407 16:12:40 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:24.407 16:12:40 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 73176 00:18:24.407 16:12:40 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 73176 ']' 00:18:24.407 16:12:40 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 73176 00:18:24.407 16:12:40 ublk_recovery -- common/autotest_common.sh@957 -- # uname 00:18:24.407 16:12:40 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:24.407 16:12:40 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73176 00:18:24.407 killing process with pid 73176 00:18:24.407 16:12:40 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:24.407 16:12:40 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:24.407 16:12:40 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73176' 00:18:24.407 16:12:40 ublk_recovery -- common/autotest_common.sh@971 -- # kill 73176 00:18:24.407 16:12:40 ublk_recovery -- common/autotest_common.sh@976 -- # wait 73176 00:18:24.407 
[2024-11-04 16:12:41.966037] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:24.407 [2024-11-04 16:12:41.966099] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:24.666 ************************************ 00:18:24.666 END TEST ublk_recovery 00:18:24.666 ************************************ 00:18:24.666 00:18:24.666 real 1m6.009s 00:18:24.666 user 1m51.779s 00:18:24.666 sys 0m24.281s 00:18:24.666 16:12:43 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:24.666 16:12:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:24.666 16:12:43 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:18:24.666 16:12:43 -- spdk/autotest.sh@256 -- # timing_exit lib 00:18:24.666 16:12:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:24.666 16:12:43 -- common/autotest_common.sh@10 -- # set +x 00:18:24.925 16:12:43 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:18:24.925 16:12:43 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:18:24.925 16:12:43 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:18:24.925 16:12:43 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:18:24.925 16:12:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:24.925 16:12:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:24.925 16:12:43 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:18:24.925 16:12:43 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:18:24.925 16:12:43 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:18:24.925 16:12:43 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:18:24.925 16:12:43 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:24.925 16:12:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:18:24.925 16:12:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:24.925 16:12:43 -- common/autotest_common.sh@10 -- # set +x 00:18:24.925 ************************************ 00:18:24.925 START TEST ftl 00:18:24.925 ************************************ 00:18:24.925 16:12:43 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:24.925 * Looking for test storage... 
00:18:24.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:24.925 16:12:43 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:24.925 16:12:43 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:18:24.925 16:12:43 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:25.185 16:12:43 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:25.185 16:12:43 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.185 16:12:43 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.185 16:12:43 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.185 16:12:43 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.185 16:12:43 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.185 16:12:43 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.185 16:12:43 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.185 16:12:43 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.185 16:12:43 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.185 16:12:43 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.185 16:12:43 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.185 16:12:43 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:25.185 16:12:43 ftl -- scripts/common.sh@345 -- # : 1 00:18:25.185 16:12:43 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.185 16:12:43 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:25.185 16:12:43 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:25.185 16:12:43 ftl -- scripts/common.sh@353 -- # local d=1 00:18:25.185 16:12:43 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.185 16:12:43 ftl -- scripts/common.sh@355 -- # echo 1 00:18:25.185 16:12:43 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.185 16:12:43 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:25.185 16:12:43 ftl -- scripts/common.sh@353 -- # local d=2 00:18:25.185 16:12:43 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.185 16:12:43 ftl -- scripts/common.sh@355 -- # echo 2 00:18:25.185 16:12:43 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.185 16:12:43 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.185 16:12:43 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.185 16:12:43 ftl -- scripts/common.sh@368 -- # return 0 00:18:25.185 16:12:43 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.185 16:12:43 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:25.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.185 --rc genhtml_branch_coverage=1 00:18:25.185 --rc genhtml_function_coverage=1 00:18:25.185 --rc genhtml_legend=1 00:18:25.185 --rc geninfo_all_blocks=1 00:18:25.185 --rc geninfo_unexecuted_blocks=1 00:18:25.185 00:18:25.185 ' 00:18:25.185 16:12:43 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:25.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.185 --rc genhtml_branch_coverage=1 00:18:25.185 --rc genhtml_function_coverage=1 00:18:25.185 --rc genhtml_legend=1 00:18:25.185 --rc geninfo_all_blocks=1 00:18:25.185 --rc geninfo_unexecuted_blocks=1 00:18:25.185 00:18:25.185 ' 00:18:25.185 16:12:43 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:25.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.185 --rc genhtml_branch_coverage=1 00:18:25.185 --rc genhtml_function_coverage=1 00:18:25.185 --rc 
genhtml_legend=1 00:18:25.185 --rc geninfo_all_blocks=1 00:18:25.185 --rc geninfo_unexecuted_blocks=1 00:18:25.185 00:18:25.185 ' 00:18:25.185 16:12:43 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:25.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.185 --rc genhtml_branch_coverage=1 00:18:25.185 --rc genhtml_function_coverage=1 00:18:25.185 --rc genhtml_legend=1 00:18:25.185 --rc geninfo_all_blocks=1 00:18:25.185 --rc geninfo_unexecuted_blocks=1 00:18:25.185 00:18:25.185 ' 00:18:25.185 16:12:43 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:25.185 16:12:43 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:25.185 16:12:43 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.185 16:12:43 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.185 16:12:43 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:25.185 16:12:43 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:25.185 16:12:43 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.185 16:12:43 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:25.185 16:12:43 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:25.185 16:12:43 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.185 16:12:43 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.185 16:12:43 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:25.185 16:12:43 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:25.185 16:12:43 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:25.185 16:12:43 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:25.185 16:12:43 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:25.185 16:12:43 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:25.185 16:12:43 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.185 16:12:43 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.185 16:12:43 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:25.185 16:12:43 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:25.185 16:12:43 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:25.185 16:12:43 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:25.185 16:12:43 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:25.185 16:12:43 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:25.185 16:12:43 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:25.185 16:12:43 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:25.185 16:12:43 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.185 16:12:43 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.185 16:12:43 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.185 16:12:43 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:25.185 16:12:43 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:18:25.185 16:12:43 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:25.185 16:12:43 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:25.185 16:12:43 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:25.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:25.822 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:25.822 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:25.822 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:25.822 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:26.082 16:12:44 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:26.082 16:12:44 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=73988 00:18:26.082 16:12:44 ftl -- ftl/ftl.sh@38 -- # waitforlisten 73988 00:18:26.082 16:12:44 ftl -- common/autotest_common.sh@833 -- # '[' -z 73988 ']' 00:18:26.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.082 16:12:44 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.082 16:12:44 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:26.082 16:12:44 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.082 16:12:44 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:26.082 16:12:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:26.082 [2024-11-04 16:12:44.654571] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:18:26.082 [2024-11-04 16:12:44.654919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73988 ] 00:18:26.343 [2024-11-04 16:12:44.833616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.343 [2024-11-04 16:12:44.938551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.916 16:12:45 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:26.916 16:12:45 ftl -- common/autotest_common.sh@866 -- # return 0 00:18:26.916 16:12:45 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:27.175 16:12:45 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:28.111 16:12:46 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:28.111 16:12:46 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:28.679 16:12:47 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:28.679 16:12:47 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:28.679 16:12:47 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:28.679 16:12:47 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:28.679 16:12:47 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:28.679 16:12:47 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:28.679 16:12:47 ftl -- ftl/ftl.sh@50 -- # break 00:18:28.679 16:12:47 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:28.679 16:12:47 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:18:28.679 16:12:47 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:28.679 16:12:47 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:28.938 16:12:47 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:28.939 16:12:47 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:28.939 16:12:47 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:28.939 16:12:47 ftl -- ftl/ftl.sh@63 -- # break 00:18:28.939 16:12:47 ftl -- ftl/ftl.sh@66 -- # killprocess 73988 00:18:28.939 16:12:47 ftl -- common/autotest_common.sh@952 -- # '[' -z 73988 ']' 00:18:28.939 16:12:47 ftl -- common/autotest_common.sh@956 -- # kill -0 73988 00:18:28.939 16:12:47 ftl -- common/autotest_common.sh@957 -- # uname 00:18:28.939 16:12:47 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:28.939 16:12:47 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73988 00:18:28.939 killing process with pid 73988 00:18:28.939 16:12:47 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:28.939 16:12:47 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:28.939 16:12:47 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73988' 00:18:28.939 16:12:47 ftl -- common/autotest_common.sh@971 -- # kill 73988 00:18:28.939 16:12:47 ftl -- common/autotest_common.sh@976 -- # wait 73988 00:18:31.475 16:12:49 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:31.475 16:12:49 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:31.475 16:12:49 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:31.475 16:12:49 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:31.475 16:12:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:31.475 ************************************ 00:18:31.475 START TEST ftl_fio_basic 00:18:31.475 ************************************ 00:18:31.475 16:12:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:31.475 * Looking for test storage... 
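(Editor's aside: the disk selection ftl.sh performed above boils down to the two jq filters recorded in the trace; shown here in condensed form, not as an excerpt from the script.)

    # cache disk: a non-zoned namespace with 64-byte metadata and at least 1310720 blocks
    rpc.py bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false
        and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
    # base disk: any other large-enough non-zoned namespace, excluding the cache address
    rpc.py bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
        and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'

In this run the filters resolved to 0000:00:10.0 as the NV cache device and 0000:00:11.0 as the base device, which is what ftl_fio_basic is invoked with below.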
00:18:31.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:31.475 16:12:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:31.475 16:12:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:18:31.475 16:12:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:31.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.475 --rc genhtml_branch_coverage=1 00:18:31.475 --rc genhtml_function_coverage=1 00:18:31.475 --rc genhtml_legend=1 00:18:31.475 --rc geninfo_all_blocks=1 00:18:31.475 --rc geninfo_unexecuted_blocks=1 00:18:31.475 00:18:31.475 ' 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:31.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.475 --rc 
genhtml_branch_coverage=1 00:18:31.475 --rc genhtml_function_coverage=1 00:18:31.475 --rc genhtml_legend=1 00:18:31.475 --rc geninfo_all_blocks=1 00:18:31.475 --rc geninfo_unexecuted_blocks=1 00:18:31.475 00:18:31.475 ' 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:31.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.475 --rc genhtml_branch_coverage=1 00:18:31.475 --rc genhtml_function_coverage=1 00:18:31.475 --rc genhtml_legend=1 00:18:31.475 --rc geninfo_all_blocks=1 00:18:31.475 --rc geninfo_unexecuted_blocks=1 00:18:31.475 00:18:31.475 ' 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:31.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.475 --rc genhtml_branch_coverage=1 00:18:31.475 --rc genhtml_function_coverage=1 00:18:31.475 --rc genhtml_legend=1 00:18:31.475 --rc geninfo_all_blocks=1 00:18:31.475 --rc geninfo_unexecuted_blocks=1 00:18:31.475 00:18:31.475 ' 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:31.475 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:31.476 
16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=74132 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 74132 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 74132 ']' 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:31.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
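(Editor's aside: the bdev stack that the following trace assembles for the fio runs can be summarized by these RPC calls, copied from the logged commands; the lvstore and lvol UUIDs are the ones reported further down.)

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe (nvme0n1)
    rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u baa399dc-cd4b-4579-8dc9-b4cb372b2adb   # 103424 MiB thin lvol
    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache NVMe
    rpc.py bdev_split_create nvc0n1 -s 5171 1                             # carves nvc0n1p0 (5171 MiB)
    rpc.py -t 240 bdev_ftl_create -b ftl0 -d 45a44274-716a-4b51-8346-8b6317f967ac \
        -c nvc0n1p0 --l2p_dram_limit 60

The resulting ftl0 bdev uses the thin-provisioned lvol as its base device and nvc0n1p0 as the write-buffer cache, matching the "Using nvc0n1p0 as write buffer cache" notice in the FTL startup messages below.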
00:18:31.476 16:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:31.476 16:12:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:31.476 [2024-11-04 16:12:50.169737] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:18:31.476 [2024-11-04 16:12:50.170574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74132 ] 00:18:31.735 [2024-11-04 16:12:50.349819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:31.993 [2024-11-04 16:12:50.466284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.993 [2024-11-04 16:12:50.466316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.993 [2024-11-04 16:12:50.466312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.931 16:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:32.931 16:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:18:32.931 16:12:51 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:32.931 16:12:51 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:32.931 16:12:51 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:32.931 16:12:51 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:32.931 16:12:51 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:32.931 16:12:51 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:32.931 16:12:51 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:32.931 16:12:51 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:32.931 16:12:51 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:32.931 16:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:18:32.931 16:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:32.931 16:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:18:32.931 16:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:18:32.931 16:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:33.190 16:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:33.190 { 00:18:33.190 "name": "nvme0n1", 00:18:33.190 "aliases": [ 00:18:33.190 "0e163036-94a0-4325-8794-771ab7996ab6" 00:18:33.190 ], 00:18:33.190 "product_name": "NVMe disk", 00:18:33.190 "block_size": 4096, 00:18:33.190 "num_blocks": 1310720, 00:18:33.190 "uuid": "0e163036-94a0-4325-8794-771ab7996ab6", 00:18:33.190 "numa_id": -1, 00:18:33.190 "assigned_rate_limits": { 00:18:33.190 "rw_ios_per_sec": 0, 00:18:33.190 "rw_mbytes_per_sec": 0, 00:18:33.190 "r_mbytes_per_sec": 0, 00:18:33.190 "w_mbytes_per_sec": 0 00:18:33.190 }, 00:18:33.190 "claimed": false, 00:18:33.190 "zoned": false, 00:18:33.190 "supported_io_types": { 00:18:33.190 "read": true, 00:18:33.190 "write": true, 00:18:33.190 "unmap": true, 00:18:33.190 "flush": true, 00:18:33.190 "reset": true, 00:18:33.190 "nvme_admin": true, 00:18:33.190 "nvme_io": true, 00:18:33.190 "nvme_io_md": 
false, 00:18:33.190 "write_zeroes": true, 00:18:33.190 "zcopy": false, 00:18:33.190 "get_zone_info": false, 00:18:33.190 "zone_management": false, 00:18:33.190 "zone_append": false, 00:18:33.190 "compare": true, 00:18:33.190 "compare_and_write": false, 00:18:33.190 "abort": true, 00:18:33.190 "seek_hole": false, 00:18:33.190 "seek_data": false, 00:18:33.190 "copy": true, 00:18:33.190 "nvme_iov_md": false 00:18:33.190 }, 00:18:33.190 "driver_specific": { 00:18:33.190 "nvme": [ 00:18:33.190 { 00:18:33.190 "pci_address": "0000:00:11.0", 00:18:33.190 "trid": { 00:18:33.190 "trtype": "PCIe", 00:18:33.190 "traddr": "0000:00:11.0" 00:18:33.190 }, 00:18:33.190 "ctrlr_data": { 00:18:33.190 "cntlid": 0, 00:18:33.190 "vendor_id": "0x1b36", 00:18:33.190 "model_number": "QEMU NVMe Ctrl", 00:18:33.190 "serial_number": "12341", 00:18:33.190 "firmware_revision": "8.0.0", 00:18:33.190 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:33.190 "oacs": { 00:18:33.190 "security": 0, 00:18:33.190 "format": 1, 00:18:33.190 "firmware": 0, 00:18:33.190 "ns_manage": 1 00:18:33.190 }, 00:18:33.190 "multi_ctrlr": false, 00:18:33.190 "ana_reporting": false 00:18:33.190 }, 00:18:33.190 "vs": { 00:18:33.190 "nvme_version": "1.4" 00:18:33.190 }, 00:18:33.190 "ns_data": { 00:18:33.190 "id": 1, 00:18:33.190 "can_share": false 00:18:33.190 } 00:18:33.190 } 00:18:33.190 ], 00:18:33.190 "mp_policy": "active_passive" 00:18:33.190 } 00:18:33.190 } 00:18:33.190 ]' 00:18:33.190 16:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:33.190 16:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:18:33.190 16:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:33.190 16:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:18:33.190 16:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:18:33.190 16:12:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:18:33.190 16:12:51 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:33.190 16:12:51 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:33.190 16:12:51 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:33.190 16:12:51 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:33.190 16:12:51 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:33.449 16:12:52 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:33.449 16:12:52 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:33.708 16:12:52 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=baa399dc-cd4b-4579-8dc9-b4cb372b2adb 00:18:33.709 16:12:52 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u baa399dc-cd4b-4579-8dc9-b4cb372b2adb 00:18:33.967 16:12:52 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=45a44274-716a-4b51-8346-8b6317f967ac 00:18:33.967 16:12:52 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 45a44274-716a-4b51-8346-8b6317f967ac 00:18:33.967 16:12:52 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:33.967 16:12:52 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:33.967 16:12:52 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=45a44274-716a-4b51-8346-8b6317f967ac 00:18:33.967 16:12:52 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:33.967 16:12:52 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 45a44274-716a-4b51-8346-8b6317f967ac 00:18:33.968 16:12:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=45a44274-716a-4b51-8346-8b6317f967ac 00:18:33.968 16:12:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:33.968 16:12:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:18:33.968 16:12:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:18:33.968 16:12:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 45a44274-716a-4b51-8346-8b6317f967ac 00:18:34.226 16:12:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:34.226 { 00:18:34.226 "name": "45a44274-716a-4b51-8346-8b6317f967ac", 00:18:34.226 "aliases": [ 00:18:34.226 "lvs/nvme0n1p0" 00:18:34.227 ], 00:18:34.227 "product_name": "Logical Volume", 00:18:34.227 "block_size": 4096, 00:18:34.227 "num_blocks": 26476544, 00:18:34.227 "uuid": "45a44274-716a-4b51-8346-8b6317f967ac", 00:18:34.227 "assigned_rate_limits": { 00:18:34.227 "rw_ios_per_sec": 0, 00:18:34.227 "rw_mbytes_per_sec": 0, 00:18:34.227 "r_mbytes_per_sec": 0, 00:18:34.227 "w_mbytes_per_sec": 0 00:18:34.227 }, 00:18:34.227 "claimed": false, 00:18:34.227 "zoned": false, 00:18:34.227 "supported_io_types": { 00:18:34.227 "read": true, 00:18:34.227 "write": true, 00:18:34.227 "unmap": true, 00:18:34.227 "flush": false, 00:18:34.227 "reset": true, 00:18:34.227 "nvme_admin": false, 00:18:34.227 "nvme_io": false, 00:18:34.227 "nvme_io_md": false, 00:18:34.227 "write_zeroes": true, 00:18:34.227 "zcopy": false, 00:18:34.227 "get_zone_info": false, 00:18:34.227 "zone_management": false, 00:18:34.227 "zone_append": false, 00:18:34.227 "compare": false, 00:18:34.227 "compare_and_write": false, 00:18:34.227 "abort": false, 00:18:34.227 "seek_hole": true, 00:18:34.227 "seek_data": true, 00:18:34.227 "copy": false, 00:18:34.227 "nvme_iov_md": false 00:18:34.227 }, 00:18:34.227 "driver_specific": { 00:18:34.227 "lvol": { 00:18:34.227 "lvol_store_uuid": "baa399dc-cd4b-4579-8dc9-b4cb372b2adb", 00:18:34.227 "base_bdev": "nvme0n1", 00:18:34.227 "thin_provision": true, 00:18:34.227 "num_allocated_clusters": 0, 00:18:34.227 "snapshot": false, 00:18:34.227 "clone": false, 00:18:34.227 "esnap_clone": false 00:18:34.227 } 00:18:34.227 } 00:18:34.227 } 00:18:34.227 ]' 00:18:34.227 16:12:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:34.227 16:12:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:18:34.227 16:12:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:34.227 16:12:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:34.227 16:12:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:34.227 16:12:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:18:34.227 16:12:52 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:34.227 16:12:52 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:34.227 16:12:52 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:34.486 16:12:53 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:34.486 16:12:53 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:18:34.486 16:12:53 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 45a44274-716a-4b51-8346-8b6317f967ac 00:18:34.486 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=45a44274-716a-4b51-8346-8b6317f967ac 00:18:34.486 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:34.486 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:18:34.486 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:18:34.486 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 45a44274-716a-4b51-8346-8b6317f967ac 00:18:34.745 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:34.745 { 00:18:34.745 "name": "45a44274-716a-4b51-8346-8b6317f967ac", 00:18:34.745 "aliases": [ 00:18:34.745 "lvs/nvme0n1p0" 00:18:34.745 ], 00:18:34.745 "product_name": "Logical Volume", 00:18:34.745 "block_size": 4096, 00:18:34.745 "num_blocks": 26476544, 00:18:34.745 "uuid": "45a44274-716a-4b51-8346-8b6317f967ac", 00:18:34.745 "assigned_rate_limits": { 00:18:34.745 "rw_ios_per_sec": 0, 00:18:34.745 "rw_mbytes_per_sec": 0, 00:18:34.745 "r_mbytes_per_sec": 0, 00:18:34.745 "w_mbytes_per_sec": 0 00:18:34.745 }, 00:18:34.745 "claimed": false, 00:18:34.745 "zoned": false, 00:18:34.745 "supported_io_types": { 00:18:34.745 "read": true, 00:18:34.745 "write": true, 00:18:34.745 "unmap": true, 00:18:34.745 "flush": false, 00:18:34.745 "reset": true, 00:18:34.745 "nvme_admin": false, 00:18:34.745 "nvme_io": false, 00:18:34.745 "nvme_io_md": false, 00:18:34.745 "write_zeroes": true, 00:18:34.745 "zcopy": false, 00:18:34.745 "get_zone_info": false, 00:18:34.745 "zone_management": false, 00:18:34.745 "zone_append": false, 00:18:34.745 "compare": false, 00:18:34.745 "compare_and_write": false, 00:18:34.745 "abort": false, 00:18:34.745 "seek_hole": true, 00:18:34.745 "seek_data": true, 00:18:34.745 "copy": false, 00:18:34.745 "nvme_iov_md": false 00:18:34.745 }, 00:18:34.745 "driver_specific": { 00:18:34.745 "lvol": { 00:18:34.745 "lvol_store_uuid": "baa399dc-cd4b-4579-8dc9-b4cb372b2adb", 00:18:34.745 "base_bdev": "nvme0n1", 00:18:34.745 "thin_provision": true, 00:18:34.745 "num_allocated_clusters": 0, 00:18:34.745 "snapshot": false, 00:18:34.745 "clone": false, 00:18:34.745 "esnap_clone": false 00:18:34.745 } 00:18:34.745 } 00:18:34.745 } 00:18:34.745 ]' 00:18:34.745 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:34.745 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:18:34.745 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:34.745 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:34.745 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:34.745 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:18:34.745 16:12:53 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:34.745 16:12:53 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:35.004 16:12:53 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:35.004 16:12:53 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:35.004 16:12:53 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:35.004 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:35.004 16:12:53 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 45a44274-716a-4b51-8346-8b6317f967ac 00:18:35.004 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=45a44274-716a-4b51-8346-8b6317f967ac 00:18:35.004 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:35.004 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:18:35.004 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:18:35.004 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 45a44274-716a-4b51-8346-8b6317f967ac 00:18:35.262 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:35.262 { 00:18:35.262 "name": "45a44274-716a-4b51-8346-8b6317f967ac", 00:18:35.262 "aliases": [ 00:18:35.262 "lvs/nvme0n1p0" 00:18:35.262 ], 00:18:35.262 "product_name": "Logical Volume", 00:18:35.262 "block_size": 4096, 00:18:35.262 "num_blocks": 26476544, 00:18:35.262 "uuid": "45a44274-716a-4b51-8346-8b6317f967ac", 00:18:35.262 "assigned_rate_limits": { 00:18:35.262 "rw_ios_per_sec": 0, 00:18:35.262 "rw_mbytes_per_sec": 0, 00:18:35.262 "r_mbytes_per_sec": 0, 00:18:35.262 "w_mbytes_per_sec": 0 00:18:35.262 }, 00:18:35.262 "claimed": false, 00:18:35.262 "zoned": false, 00:18:35.262 "supported_io_types": { 00:18:35.262 "read": true, 00:18:35.262 "write": true, 00:18:35.262 "unmap": true, 00:18:35.262 "flush": false, 00:18:35.262 "reset": true, 00:18:35.262 "nvme_admin": false, 00:18:35.262 "nvme_io": false, 00:18:35.262 "nvme_io_md": false, 00:18:35.262 "write_zeroes": true, 00:18:35.262 "zcopy": false, 00:18:35.262 "get_zone_info": false, 00:18:35.262 "zone_management": false, 00:18:35.262 "zone_append": false, 00:18:35.262 "compare": false, 00:18:35.262 "compare_and_write": false, 00:18:35.262 "abort": false, 00:18:35.262 "seek_hole": true, 00:18:35.262 "seek_data": true, 00:18:35.262 "copy": false, 00:18:35.262 "nvme_iov_md": false 00:18:35.262 }, 00:18:35.262 "driver_specific": { 00:18:35.262 "lvol": { 00:18:35.262 "lvol_store_uuid": "baa399dc-cd4b-4579-8dc9-b4cb372b2adb", 00:18:35.262 "base_bdev": "nvme0n1", 00:18:35.262 "thin_provision": true, 00:18:35.262 "num_allocated_clusters": 0, 00:18:35.262 "snapshot": false, 00:18:35.262 "clone": false, 00:18:35.262 "esnap_clone": false 00:18:35.262 } 00:18:35.262 } 00:18:35.262 } 00:18:35.262 ]' 00:18:35.262 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:35.262 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:18:35.262 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:35.262 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:35.262 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:35.262 16:12:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:18:35.262 16:12:53 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:35.262 16:12:53 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:35.262 16:12:53 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 45a44274-716a-4b51-8346-8b6317f967ac -c nvc0n1p0 --l2p_dram_limit 60 00:18:35.522 [2024-11-04 16:12:54.059088] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.522 [2024-11-04 16:12:54.059149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:35.522 [2024-11-04 16:12:54.059171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:35.522 [2024-11-04 16:12:54.059185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.522 [2024-11-04 16:12:54.059261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.522 [2024-11-04 16:12:54.059278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:35.522 [2024-11-04 16:12:54.059294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:35.522 [2024-11-04 16:12:54.059306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.522 [2024-11-04 16:12:54.059353] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:35.522 [2024-11-04 16:12:54.060404] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:35.522 [2024-11-04 16:12:54.060442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.522 [2024-11-04 16:12:54.060456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.522 [2024-11-04 16:12:54.060472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:18:35.522 [2024-11-04 16:12:54.060485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.522 [2024-11-04 16:12:54.060591] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2b93457f-9ff3-4cfa-b867-1e04882f0a55 00:18:35.522 [2024-11-04 16:12:54.062106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.522 [2024-11-04 16:12:54.062286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:35.522 [2024-11-04 16:12:54.062327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:35.522 [2024-11-04 16:12:54.062343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.522 [2024-11-04 16:12:54.070050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.522 [2024-11-04 16:12:54.070085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.522 [2024-11-04 16:12:54.070100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.643 ms 00:18:35.522 [2024-11-04 16:12:54.070115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.522 [2024-11-04 16:12:54.070243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.522 [2024-11-04 16:12:54.070265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.522 [2024-11-04 16:12:54.070279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:18:35.522 [2024-11-04 16:12:54.070298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.522 [2024-11-04 16:12:54.070383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.522 [2024-11-04 16:12:54.070404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:35.522 [2024-11-04 16:12:54.070417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:35.522 [2024-11-04 16:12:54.070431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:35.522 [2024-11-04 16:12:54.070477] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:35.522 [2024-11-04 16:12:54.075555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.522 [2024-11-04 16:12:54.075588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.522 [2024-11-04 16:12:54.075606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.098 ms 00:18:35.522 [2024-11-04 16:12:54.075621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.522 [2024-11-04 16:12:54.075692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.522 [2024-11-04 16:12:54.075710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:35.522 [2024-11-04 16:12:54.075726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:35.522 [2024-11-04 16:12:54.075738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.522 [2024-11-04 16:12:54.075802] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:35.523 [2024-11-04 16:12:54.075965] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:35.523 [2024-11-04 16:12:54.075994] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:35.523 [2024-11-04 16:12:54.076011] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:35.523 [2024-11-04 16:12:54.076029] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:35.523 [2024-11-04 16:12:54.076044] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:35.523 [2024-11-04 16:12:54.076060] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:35.523 [2024-11-04 16:12:54.076072] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:35.523 [2024-11-04 16:12:54.076087] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:35.523 [2024-11-04 16:12:54.076110] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:35.523 [2024-11-04 16:12:54.076128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.523 [2024-11-04 16:12:54.076144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:35.523 [2024-11-04 16:12:54.076159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:18:35.523 [2024-11-04 16:12:54.076172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.523 [2024-11-04 16:12:54.076257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.523 [2024-11-04 16:12:54.076270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:35.523 [2024-11-04 16:12:54.076285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:35.523 [2024-11-04 16:12:54.076298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.523 [2024-11-04 16:12:54.076406] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:35.523 [2024-11-04 16:12:54.076420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:35.523 
[2024-11-04 16:12:54.076439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.523 [2024-11-04 16:12:54.076452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.523 [2024-11-04 16:12:54.076467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:35.523 [2024-11-04 16:12:54.076479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:35.523 [2024-11-04 16:12:54.076493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:35.523 [2024-11-04 16:12:54.076504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:35.523 [2024-11-04 16:12:54.076519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:35.523 [2024-11-04 16:12:54.076531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.523 [2024-11-04 16:12:54.076544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:35.523 [2024-11-04 16:12:54.076557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:35.523 [2024-11-04 16:12:54.076572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.523 [2024-11-04 16:12:54.076583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:35.523 [2024-11-04 16:12:54.076599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:35.523 [2024-11-04 16:12:54.076610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.523 [2024-11-04 16:12:54.076626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:35.523 [2024-11-04 16:12:54.076637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:35.523 [2024-11-04 16:12:54.076651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.523 [2024-11-04 16:12:54.076662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:35.523 [2024-11-04 16:12:54.076676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:35.523 [2024-11-04 16:12:54.076687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.523 [2024-11-04 16:12:54.076701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:35.523 [2024-11-04 16:12:54.076712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:35.523 [2024-11-04 16:12:54.076726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.523 [2024-11-04 16:12:54.076737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:35.523 [2024-11-04 16:12:54.076761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:35.523 [2024-11-04 16:12:54.076774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.523 [2024-11-04 16:12:54.076788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:35.523 [2024-11-04 16:12:54.076799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:35.523 [2024-11-04 16:12:54.076813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.523 [2024-11-04 16:12:54.076824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:35.523 [2024-11-04 16:12:54.076840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:35.523 [2024-11-04 16:12:54.076852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:18:35.523 [2024-11-04 16:12:54.076866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:35.523 [2024-11-04 16:12:54.076891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:35.523 [2024-11-04 16:12:54.076923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:35.523 [2024-11-04 16:12:54.076934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:35.523 [2024-11-04 16:12:54.076950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:35.523 [2024-11-04 16:12:54.076962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.523 [2024-11-04 16:12:54.076975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:35.523 [2024-11-04 16:12:54.076986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:35.523 [2024-11-04 16:12:54.077000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.523 [2024-11-04 16:12:54.077011] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:35.523 [2024-11-04 16:12:54.077026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:35.523 [2024-11-04 16:12:54.077037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.523 [2024-11-04 16:12:54.077052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.523 [2024-11-04 16:12:54.077064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:35.523 [2024-11-04 16:12:54.077081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:35.523 [2024-11-04 16:12:54.077095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:35.523 [2024-11-04 16:12:54.077110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:35.523 [2024-11-04 16:12:54.077121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:35.523 [2024-11-04 16:12:54.077134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:35.523 [2024-11-04 16:12:54.077151] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:35.523 [2024-11-04 16:12:54.077169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.523 [2024-11-04 16:12:54.077182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:35.523 [2024-11-04 16:12:54.077214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:35.523 [2024-11-04 16:12:54.077227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:35.523 [2024-11-04 16:12:54.077242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:35.523 [2024-11-04 16:12:54.077255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:35.523 [2024-11-04 16:12:54.077271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:35.523 [2024-11-04 
16:12:54.077284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:35.523 [2024-11-04 16:12:54.077299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:35.523 [2024-11-04 16:12:54.077311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:35.523 [2024-11-04 16:12:54.077330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:35.523 [2024-11-04 16:12:54.077343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:35.523 [2024-11-04 16:12:54.077357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:35.523 [2024-11-04 16:12:54.077370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:35.523 [2024-11-04 16:12:54.077385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:35.523 [2024-11-04 16:12:54.077397] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:35.523 [2024-11-04 16:12:54.077418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.523 [2024-11-04 16:12:54.077434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:35.523 [2024-11-04 16:12:54.077450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:35.523 [2024-11-04 16:12:54.077463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:35.523 [2024-11-04 16:12:54.077479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:35.523 [2024-11-04 16:12:54.077498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.523 [2024-11-04 16:12:54.077513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:35.523 [2024-11-04 16:12:54.077526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.150 ms 00:18:35.523 [2024-11-04 16:12:54.077541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.524 [2024-11-04 16:12:54.077608] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:35.524 [2024-11-04 16:12:54.077628] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:42.097 [2024-11-04 16:13:00.421985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.097 [2024-11-04 16:13:00.422058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:42.097 [2024-11-04 16:13:00.422082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6354.674 ms 00:18:42.097 [2024-11-04 16:13:00.422098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-11-04 16:13:00.458350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.097 [2024-11-04 16:13:00.458408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:42.097 [2024-11-04 16:13:00.458426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.005 ms 00:18:42.097 [2024-11-04 16:13:00.458443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-11-04 16:13:00.458613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.097 [2024-11-04 16:13:00.458633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:42.097 [2024-11-04 16:13:00.458647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:42.097 [2024-11-04 16:13:00.458666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-11-04 16:13:00.509774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.097 [2024-11-04 16:13:00.509824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:42.097 [2024-11-04 16:13:00.509846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.111 ms 00:18:42.097 [2024-11-04 16:13:00.509862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-11-04 16:13:00.509907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.097 [2024-11-04 16:13:00.509923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:42.097 [2024-11-04 16:13:00.509936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:42.097 [2024-11-04 16:13:00.509952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-11-04 16:13:00.510448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.097 [2024-11-04 16:13:00.510480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:42.097 [2024-11-04 16:13:00.510510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:18:42.097 [2024-11-04 16:13:00.510530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-11-04 16:13:00.510660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.097 [2024-11-04 16:13:00.510683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:42.097 [2024-11-04 16:13:00.510697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:18:42.097 [2024-11-04 16:13:00.510716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-11-04 16:13:00.531964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.097 [2024-11-04 16:13:00.532011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:42.097 [2024-11-04 
16:13:00.532028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.227 ms 00:18:42.097 [2024-11-04 16:13:00.532044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-11-04 16:13:00.545270] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:42.097 [2024-11-04 16:13:00.562102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.097 [2024-11-04 16:13:00.562171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:42.097 [2024-11-04 16:13:00.562193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.987 ms 00:18:42.097 [2024-11-04 16:13:00.562209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-11-04 16:13:00.661362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.097 [2024-11-04 16:13:00.661421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:42.097 [2024-11-04 16:13:00.661449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.249 ms 00:18:42.097 [2024-11-04 16:13:00.661462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-11-04 16:13:00.661670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.097 [2024-11-04 16:13:00.661690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:42.097 [2024-11-04 16:13:00.661711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:18:42.097 [2024-11-04 16:13:00.661724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.097 [2024-11-04 16:13:00.698591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.097 [2024-11-04 16:13:00.698780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:42.097 [2024-11-04 16:13:00.698813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.839 ms 00:18:42.097 [2024-11-04 16:13:00.698827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.098 [2024-11-04 16:13:00.734175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.098 [2024-11-04 16:13:00.734213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:42.098 [2024-11-04 16:13:00.734234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.348 ms 00:18:42.098 [2024-11-04 16:13:00.734246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.098 [2024-11-04 16:13:00.734991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.098 [2024-11-04 16:13:00.735015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:42.098 [2024-11-04 16:13:00.735033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:18:42.098 [2024-11-04 16:13:00.735045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.357 [2024-11-04 16:13:00.836298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.357 [2024-11-04 16:13:00.836344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:42.357 [2024-11-04 16:13:00.836369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.338 ms 00:18:42.357 [2024-11-04 16:13:00.836386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.357 [2024-11-04 
16:13:00.874199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.357 [2024-11-04 16:13:00.874240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:42.357 [2024-11-04 16:13:00.874261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.759 ms 00:18:42.357 [2024-11-04 16:13:00.874275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.357 [2024-11-04 16:13:00.909889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.357 [2024-11-04 16:13:00.909927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:42.357 [2024-11-04 16:13:00.909947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.611 ms 00:18:42.357 [2024-11-04 16:13:00.909960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.357 [2024-11-04 16:13:00.946765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.357 [2024-11-04 16:13:00.946803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:42.357 [2024-11-04 16:13:00.946823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.806 ms 00:18:42.357 [2024-11-04 16:13:00.946835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.357 [2024-11-04 16:13:00.946895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.357 [2024-11-04 16:13:00.946909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:42.357 [2024-11-04 16:13:00.946927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:42.357 [2024-11-04 16:13:00.946943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.357 [2024-11-04 16:13:00.947090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:42.357 [2024-11-04 16:13:00.947111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:42.357 [2024-11-04 16:13:00.947128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:42.357 [2024-11-04 16:13:00.947140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:42.357 [2024-11-04 16:13:00.948614] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 6900.257 ms, result 0 00:18:42.357 { 00:18:42.357 "name": "ftl0", 00:18:42.357 "uuid": "2b93457f-9ff3-4cfa-b867-1e04882f0a55" 00:18:42.357 } 00:18:42.357 16:13:00 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:42.357 16:13:00 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:18:42.357 16:13:00 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:42.357 16:13:00 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:18:42.357 16:13:00 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:42.357 16:13:00 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:42.357 16:13:00 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:42.616 16:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:42.875 [ 00:18:42.875 { 00:18:42.875 "name": "ftl0", 00:18:42.875 "aliases": [ 00:18:42.875 "2b93457f-9ff3-4cfa-b867-1e04882f0a55" 00:18:42.875 ], 00:18:42.875 "product_name": "FTL 
disk", 00:18:42.875 "block_size": 4096, 00:18:42.875 "num_blocks": 20971520, 00:18:42.875 "uuid": "2b93457f-9ff3-4cfa-b867-1e04882f0a55", 00:18:42.875 "assigned_rate_limits": { 00:18:42.875 "rw_ios_per_sec": 0, 00:18:42.875 "rw_mbytes_per_sec": 0, 00:18:42.875 "r_mbytes_per_sec": 0, 00:18:42.875 "w_mbytes_per_sec": 0 00:18:42.875 }, 00:18:42.875 "claimed": false, 00:18:42.875 "zoned": false, 00:18:42.875 "supported_io_types": { 00:18:42.875 "read": true, 00:18:42.875 "write": true, 00:18:42.875 "unmap": true, 00:18:42.875 "flush": true, 00:18:42.875 "reset": false, 00:18:42.875 "nvme_admin": false, 00:18:42.875 "nvme_io": false, 00:18:42.875 "nvme_io_md": false, 00:18:42.875 "write_zeroes": true, 00:18:42.875 "zcopy": false, 00:18:42.875 "get_zone_info": false, 00:18:42.875 "zone_management": false, 00:18:42.875 "zone_append": false, 00:18:42.875 "compare": false, 00:18:42.875 "compare_and_write": false, 00:18:42.875 "abort": false, 00:18:42.875 "seek_hole": false, 00:18:42.875 "seek_data": false, 00:18:42.875 "copy": false, 00:18:42.875 "nvme_iov_md": false 00:18:42.875 }, 00:18:42.875 "driver_specific": { 00:18:42.875 "ftl": { 00:18:42.875 "base_bdev": "45a44274-716a-4b51-8346-8b6317f967ac", 00:18:42.875 "cache": "nvc0n1p0" 00:18:42.875 } 00:18:42.875 } 00:18:42.875 } 00:18:42.875 ] 00:18:42.875 16:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:18:42.875 16:13:01 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:42.875 16:13:01 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:43.134 16:13:01 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:43.134 16:13:01 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:43.393 [2024-11-04 16:13:01.859544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.393 [2024-11-04 16:13:01.859598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:43.393 [2024-11-04 16:13:01.859616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:43.393 [2024-11-04 16:13:01.859632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.393 [2024-11-04 16:13:01.859671] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:43.393 [2024-11-04 16:13:01.864171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.393 [2024-11-04 16:13:01.864340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:43.393 [2024-11-04 16:13:01.864373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.478 ms 00:18:43.393 [2024-11-04 16:13:01.864386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.393 [2024-11-04 16:13:01.864865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.393 [2024-11-04 16:13:01.864887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:43.393 [2024-11-04 16:13:01.864906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:18:43.393 [2024-11-04 16:13:01.864918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.393 [2024-11-04 16:13:01.867424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.393 [2024-11-04 16:13:01.867574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:43.393 
[2024-11-04 16:13:01.867602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.480 ms 00:18:43.393 [2024-11-04 16:13:01.867615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.393 [2024-11-04 16:13:01.872656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.393 [2024-11-04 16:13:01.872690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:43.393 [2024-11-04 16:13:01.872708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.004 ms 00:18:43.393 [2024-11-04 16:13:01.872721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.393 [2024-11-04 16:13:01.910602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.393 [2024-11-04 16:13:01.910643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:43.393 [2024-11-04 16:13:01.910664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.823 ms 00:18:43.393 [2024-11-04 16:13:01.910677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.393 [2024-11-04 16:13:01.933409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.393 [2024-11-04 16:13:01.933448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:43.393 [2024-11-04 16:13:01.933467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.689 ms 00:18:43.393 [2024-11-04 16:13:01.933484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.394 [2024-11-04 16:13:01.933682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.394 [2024-11-04 16:13:01.933702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:43.394 [2024-11-04 16:13:01.933719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:18:43.394 [2024-11-04 16:13:01.933731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.394 [2024-11-04 16:13:01.970275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.394 [2024-11-04 16:13:01.970312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:43.394 [2024-11-04 16:13:01.970332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.559 ms 00:18:43.394 [2024-11-04 16:13:01.970344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.394 [2024-11-04 16:13:02.005931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.394 [2024-11-04 16:13:02.006102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:43.394 [2024-11-04 16:13:02.006132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.587 ms 00:18:43.394 [2024-11-04 16:13:02.006144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.394 [2024-11-04 16:13:02.041502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.394 [2024-11-04 16:13:02.041669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:43.394 [2024-11-04 16:13:02.041700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.351 ms 00:18:43.394 [2024-11-04 16:13:02.041712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.394 [2024-11-04 16:13:02.077084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.394 [2024-11-04 16:13:02.077132] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:43.394 [2024-11-04 16:13:02.077153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.280 ms 00:18:43.394 [2024-11-04 16:13:02.077164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.394 [2024-11-04 16:13:02.077234] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:43.394 [2024-11-04 16:13:02.077255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 
[2024-11-04 16:13:02.077575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:43.394 [2024-11-04 16:13:02.077966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.077995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:43.394 [2024-11-04 16:13:02.078327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:43.395 [2024-11-04 16:13:02.078794] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:43.395 [2024-11-04 16:13:02.078810] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2b93457f-9ff3-4cfa-b867-1e04882f0a55 00:18:43.395 [2024-11-04 16:13:02.078823] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:43.395 [2024-11-04 16:13:02.078841] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:43.395 [2024-11-04 16:13:02.078853] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:43.395 [2024-11-04 16:13:02.078872] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:43.395 [2024-11-04 16:13:02.078884] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:43.395 [2024-11-04 16:13:02.078899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:43.395 [2024-11-04 16:13:02.078912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:43.395 [2024-11-04 16:13:02.078926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:43.395 [2024-11-04 16:13:02.078936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:43.395 [2024-11-04 16:13:02.078952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.395 [2024-11-04 16:13:02.078964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:43.395 [2024-11-04 16:13:02.078980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.739 ms 00:18:43.395 [2024-11-04 16:13:02.078992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.395 [2024-11-04 16:13:02.098830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.395 [2024-11-04 16:13:02.098997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:43.395 [2024-11-04 16:13:02.099027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.782 ms 00:18:43.395 [2024-11-04 16:13:02.099040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.395 [2024-11-04 16:13:02.099607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.395 [2024-11-04 16:13:02.099620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:43.395 [2024-11-04 16:13:02.099636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:18:43.395 [2024-11-04 16:13:02.099648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.654 [2024-11-04 16:13:02.167347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.654 [2024-11-04 16:13:02.167393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:43.654 [2024-11-04 16:13:02.167412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.654 [2024-11-04 16:13:02.167441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:43.654 [2024-11-04 16:13:02.167512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.654 [2024-11-04 16:13:02.167525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:43.654 [2024-11-04 16:13:02.167541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.654 [2024-11-04 16:13:02.167553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.654 [2024-11-04 16:13:02.167662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.654 [2024-11-04 16:13:02.167679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:43.654 [2024-11-04 16:13:02.167699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.654 [2024-11-04 16:13:02.167712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.654 [2024-11-04 16:13:02.167781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.654 [2024-11-04 16:13:02.167795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:43.654 [2024-11-04 16:13:02.167811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.654 [2024-11-04 16:13:02.167823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.654 [2024-11-04 16:13:02.299340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.654 [2024-11-04 16:13:02.299407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:43.654 [2024-11-04 16:13:02.299442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.654 [2024-11-04 16:13:02.299455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.913 [2024-11-04 16:13:02.398054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.913 [2024-11-04 16:13:02.398114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:43.913 [2024-11-04 16:13:02.398134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.913 [2024-11-04 16:13:02.398147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.913 [2024-11-04 16:13:02.398277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.913 [2024-11-04 16:13:02.398292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:43.913 [2024-11-04 16:13:02.398307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.913 [2024-11-04 16:13:02.398323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.913 [2024-11-04 16:13:02.398404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.913 [2024-11-04 16:13:02.398417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:43.913 [2024-11-04 16:13:02.398432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.913 [2024-11-04 16:13:02.398444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.913 [2024-11-04 16:13:02.398603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.913 [2024-11-04 16:13:02.398620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:43.913 [2024-11-04 16:13:02.398635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.913 [2024-11-04 
16:13:02.398648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.913 [2024-11-04 16:13:02.398717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.913 [2024-11-04 16:13:02.398731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:43.913 [2024-11-04 16:13:02.398767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.913 [2024-11-04 16:13:02.398780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.913 [2024-11-04 16:13:02.398851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.913 [2024-11-04 16:13:02.398865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:43.913 [2024-11-04 16:13:02.398880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.913 [2024-11-04 16:13:02.398893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.913 [2024-11-04 16:13:02.398961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.913 [2024-11-04 16:13:02.398975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:43.913 [2024-11-04 16:13:02.398990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.913 [2024-11-04 16:13:02.399002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.913 [2024-11-04 16:13:02.399181] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 540.481 ms, result 0 00:18:43.913 true 00:18:43.913 16:13:02 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 74132 00:18:43.913 16:13:02 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 74132 ']' 00:18:43.913 16:13:02 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 74132 00:18:43.913 16:13:02 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname 00:18:43.913 16:13:02 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:43.913 16:13:02 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74132 00:18:43.913 killing process with pid 74132 00:18:43.913 16:13:02 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:43.913 16:13:02 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:43.914 16:13:02 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74132' 00:18:43.914 16:13:02 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 74132 00:18:43.914 16:13:02 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 74132 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:49.246 16:13:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:49.246 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:49.246 fio-3.35 00:18:49.246 Starting 1 thread 00:18:54.524 00:18:54.524 test: (groupid=0, jobs=1): err= 0: pid=74377: Mon Nov 4 16:13:13 2024 00:18:54.524 read: IOPS=878, BW=58.3MiB/s (61.2MB/s)(255MiB/4364msec) 00:18:54.524 slat (nsec): min=4346, max=25332, avg=5828.37, stdev=2118.72 00:18:54.524 clat (usec): min=331, max=16840, avg=512.52, stdev=427.94 00:18:54.524 lat (usec): min=336, max=16845, avg=518.35, stdev=427.98 00:18:54.524 clat percentiles (usec): 00:18:54.524 | 1.00th=[ 392], 5.00th=[ 408], 10.00th=[ 441], 20.00th=[ 465], 00:18:54.524 | 30.00th=[ 478], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 523], 00:18:54.524 | 70.00th=[ 537], 80.00th=[ 545], 90.00th=[ 553], 95.00th=[ 562], 00:18:54.524 | 99.00th=[ 627], 99.50th=[ 685], 99.90th=[ 1004], 99.95th=[16188], 00:18:54.524 | 99.99th=[16909] 00:18:54.524 write: IOPS=884, BW=58.7MiB/s (61.6MB/s)(256MiB/4359msec); 0 zone resets 00:18:54.524 slat (nsec): min=15547, max=68795, avg=19218.41, stdev=4215.42 00:18:54.524 clat (usec): min=369, max=23462, avg=586.24, stdev=608.74 00:18:54.524 lat (usec): min=392, max=23488, avg=605.46, stdev=608.76 00:18:54.524 clat percentiles (usec): 00:18:54.524 | 1.00th=[ 424], 5.00th=[ 478], 10.00th=[ 486], 20.00th=[ 502], 00:18:54.524 | 30.00th=[ 537], 40.00th=[ 553], 50.00th=[ 562], 60.00th=[ 570], 00:18:54.524 | 70.00th=[ 578], 80.00th=[ 594], 90.00th=[ 635], 95.00th=[ 644], 00:18:54.524 | 99.00th=[ 898], 99.50th=[ 963], 99.90th=[12518], 99.95th=[14353], 00:18:54.524 | 99.99th=[23462] 00:18:54.524 bw ( KiB/s): min=54672, max=65008, per=99.45%, avg=59823.00, stdev=3316.55, samples=8 00:18:54.524 iops : min= 804, max= 956, avg=879.75, stdev=48.77, samples=8 00:18:54.524 lat (usec) : 500=37.72%, 750=61.04%, 1000=1.05% 
00:18:54.524 lat (msec) : 2=0.05%, 10=0.03%, 20=0.10%, 50=0.01% 00:18:54.524 cpu : usr=99.34%, sys=0.11%, ctx=8, majf=0, minf=1169 00:18:54.524 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:54.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.524 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.524 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:54.524 00:18:54.524 Run status group 0 (all jobs): 00:18:54.524 READ: bw=58.3MiB/s (61.2MB/s), 58.3MiB/s-58.3MiB/s (61.2MB/s-61.2MB/s), io=255MiB (267MB), run=4364-4364msec 00:18:54.524 WRITE: bw=58.7MiB/s (61.6MB/s), 58.7MiB/s-58.7MiB/s (61.6MB/s-61.6MB/s), io=256MiB (269MB), run=4359-4359msec 00:18:56.429 ----------------------------------------------------- 00:18:56.429 Suppressions used: 00:18:56.429 count bytes template 00:18:56.429 1 5 /usr/src/fio/parse.c 00:18:56.429 1 8 libtcmalloc_minimal.so 00:18:56.429 1 904 libcrypto.so 00:18:56.429 ----------------------------------------------------- 00:18:56.429 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:18:56.429 16:13:15 ftl.ftl_fio_basic 
-- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:56.429 16:13:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:56.687 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:56.687 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:56.687 fio-3.35 00:18:56.687 Starting 2 threads 00:19:23.233 00:19:23.233 first_half: (groupid=0, jobs=1): err= 0: pid=74484: Mon Nov 4 16:13:39 2024 00:19:23.233 read: IOPS=2917, BW=11.4MiB/s (11.9MB/s)(256MiB/22441msec) 00:19:23.233 slat (nsec): min=3395, max=53486, avg=5674.72, stdev=1666.81 00:19:23.233 clat (usec): min=675, max=248068, avg=36837.55, stdev=23241.44 00:19:23.233 lat (usec): min=679, max=248074, avg=36843.23, stdev=23241.76 00:19:23.233 clat percentiles (msec): 00:19:23.233 | 1.00th=[ 8], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:19:23.233 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:19:23.233 | 70.00th=[ 33], 80.00th=[ 37], 90.00th=[ 38], 95.00th=[ 75], 00:19:23.233 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 192], 99.95th=[ 215], 00:19:23.233 | 99.99th=[ 243] 00:19:23.233 write: IOPS=2924, BW=11.4MiB/s (12.0MB/s)(256MiB/22411msec); 0 zone resets 00:19:23.233 slat (usec): min=4, max=520, avg= 7.06, stdev= 5.63 00:19:23.233 clat (usec): min=358, max=77979, avg=6998.62, stdev=6537.87 00:19:23.233 lat (usec): min=365, max=77986, avg=7005.68, stdev=6537.92 00:19:23.233 clat percentiles (usec): 00:19:23.233 | 1.00th=[ 1029], 5.00th=[ 1303], 10.00th=[ 1680], 20.00th=[ 3130], 00:19:23.233 | 30.00th=[ 4178], 40.00th=[ 5145], 50.00th=[ 5800], 60.00th=[ 6390], 00:19:23.233 | 70.00th=[ 6980], 80.00th=[ 8160], 90.00th=[11994], 95.00th=[19006], 00:19:23.233 | 99.00th=[34866], 99.50th=[36439], 99.90th=[57934], 99.95th=[68682], 00:19:23.233 | 99.99th=[76022] 00:19:23.233 bw ( KiB/s): min= 256, max=50320, per=100.00%, avg=24798.10, stdev=14676.75, samples=21 00:19:23.233 iops : min= 64, max=12580, avg=6199.52, stdev=3669.19, samples=21 00:19:23.233 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.36% 00:19:23.233 lat (msec) : 2=5.94%, 4=7.86%, 10=29.80%, 20=5.05%, 50=47.50% 00:19:23.233 lat (msec) : 100=1.59%, 250=1.86% 00:19:23.233 cpu : usr=99.17%, sys=0.25%, ctx=39, majf=0, minf=5552 00:19:23.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:23.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.233 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:23.233 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:23.233 second_half: (groupid=0, jobs=1): err= 0: pid=74485: Mon Nov 4 16:13:39 2024 00:19:23.233 read: IOPS=2941, BW=11.5MiB/s (12.0MB/s)(256MiB/22263msec) 00:19:23.233 slat (nsec): min=3421, max=34260, avg=5737.84, stdev=1705.07 00:19:23.233 clat (msec): min=9, max=200, avg=37.23, stdev=21.25 00:19:23.233 lat (msec): min=9, max=200, avg=37.24, stdev=21.25 00:19:23.233 clat percentiles (msec): 00:19:23.233 | 1.00th=[ 28], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:19:23.233 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:19:23.233 | 70.00th=[ 33], 80.00th=[ 37], 90.00th=[ 39], 95.00th=[ 68], 00:19:23.233 | 
99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 184], 99.95th=[ 190], 00:19:23.233 | 99.99th=[ 197] 00:19:23.233 write: IOPS=2959, BW=11.6MiB/s (12.1MB/s)(256MiB/22142msec); 0 zone resets 00:19:23.233 slat (usec): min=4, max=557, avg= 7.04, stdev= 6.02 00:19:23.233 clat (usec): min=417, max=36392, avg=6256.75, stdev=3682.87 00:19:23.233 lat (usec): min=426, max=36398, avg=6263.79, stdev=3683.01 00:19:23.233 clat percentiles (usec): 00:19:23.233 | 1.00th=[ 1221], 5.00th=[ 1926], 10.00th=[ 2507], 20.00th=[ 3589], 00:19:23.233 | 30.00th=[ 4555], 40.00th=[ 5145], 50.00th=[ 5735], 60.00th=[ 6194], 00:19:23.233 | 70.00th=[ 6783], 80.00th=[ 7832], 90.00th=[11076], 95.00th=[12387], 00:19:23.233 | 99.00th=[21627], 99.50th=[27132], 99.90th=[32637], 99.95th=[34866], 00:19:23.233 | 99.99th=[35914] 00:19:23.233 bw ( KiB/s): min= 5008, max=41016, per=100.00%, avg=24869.33, stdev=11059.29, samples=21 00:19:23.233 iops : min= 1252, max=10254, avg=6217.33, stdev=2764.82, samples=21 00:19:23.233 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.15% 00:19:23.233 lat (msec) : 2=2.59%, 4=9.62%, 10=31.04%, 20=6.01%, 50=47.16% 00:19:23.233 lat (msec) : 100=1.69%, 250=1.69% 00:19:23.233 cpu : usr=99.09%, sys=0.28%, ctx=137, majf=0, minf=5563 00:19:23.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:23.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.233 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:23.233 issued rwts: total=65490,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:23.233 00:19:23.233 Run status group 0 (all jobs): 00:19:23.233 READ: bw=22.8MiB/s (23.9MB/s), 11.4MiB/s-11.5MiB/s (11.9MB/s-12.0MB/s), io=512MiB (536MB), run=22263-22441msec 00:19:23.233 WRITE: bw=22.8MiB/s (24.0MB/s), 11.4MiB/s-11.6MiB/s (12.0MB/s-12.1MB/s), io=512MiB (537MB), run=22142-22411msec 00:19:23.233 ----------------------------------------------------- 00:19:23.233 Suppressions used: 00:19:23.233 count bytes template 00:19:23.233 2 10 /usr/src/fio/parse.c 00:19:23.233 4 384 /usr/src/fio/iolog.c 00:19:23.233 1 8 libtcmalloc_minimal.so 00:19:23.233 1 904 libcrypto.so 00:19:23.233 ----------------------------------------------------- 00:19:23.233 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:23.233 16:13:41 ftl.ftl_fio_basic 
-- common/autotest_common.sh@1341 -- # local sanitizers 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:23.233 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:19:23.234 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:23.234 16:13:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:23.491 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:23.491 fio-3.35 00:19:23.491 Starting 1 thread 00:19:38.376 00:19:38.376 test: (groupid=0, jobs=1): err= 0: pid=74788: Mon Nov 4 16:13:56 2024 00:19:38.376 read: IOPS=8122, BW=31.7MiB/s (33.3MB/s)(255MiB/8027msec) 00:19:38.376 slat (nsec): min=3275, max=44557, avg=4919.86, stdev=1589.64 00:19:38.376 clat (usec): min=583, max=31300, avg=15749.33, stdev=1028.83 00:19:38.376 lat (usec): min=591, max=31307, avg=15754.25, stdev=1028.82 00:19:38.376 clat percentiles (usec): 00:19:38.376 | 1.00th=[14746], 5.00th=[15008], 10.00th=[15139], 20.00th=[15270], 00:19:38.376 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15664], 60.00th=[15795], 00:19:38.376 | 70.00th=[15926], 80.00th=[16057], 90.00th=[16319], 95.00th=[16581], 00:19:38.376 | 99.00th=[19530], 99.50th=[22152], 99.90th=[27132], 99.95th=[27919], 00:19:38.376 | 99.99th=[30540] 00:19:38.376 write: IOPS=12.2k, BW=47.5MiB/s (49.8MB/s)(256MiB/5391msec); 0 zone resets 00:19:38.376 slat (usec): min=4, max=1426, avg= 7.87, stdev= 8.83 00:19:38.376 clat (usec): min=606, max=59345, avg=10480.38, stdev=12218.65 00:19:38.376 lat (usec): min=633, max=59351, avg=10488.25, stdev=12218.64 00:19:38.376 clat percentiles (usec): 00:19:38.376 | 1.00th=[ 930], 5.00th=[ 1139], 10.00th=[ 1287], 20.00th=[ 1500], 00:19:38.376 | 30.00th=[ 1680], 40.00th=[ 2343], 50.00th=[ 7242], 60.00th=[ 8455], 00:19:38.376 | 70.00th=[10290], 80.00th=[13698], 90.00th=[34866], 95.00th=[36439], 00:19:38.376 | 99.00th=[47973], 99.50th=[52167], 99.90th=[56361], 99.95th=[57410], 00:19:38.376 | 99.99th=[58459] 00:19:38.376 bw ( KiB/s): min=35688, max=60488, per=98.02%, avg=47662.55, stdev=7417.84, samples=11 00:19:38.376 iops : min= 8922, max=15122, avg=11915.64, stdev=1854.46, samples=11 00:19:38.376 lat (usec) : 750=0.05%, 1000=0.88% 00:19:38.376 lat (msec) : 2=18.24%, 4=1.94%, 10=13.42%, 20=57.11%, 50=7.99% 00:19:38.376 lat (msec) : 100=0.37% 00:19:38.376 cpu : usr=98.48%, sys=0.66%, ctx=23, majf=0, minf=5565 00:19:38.376 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:38.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.376 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:38.376 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:38.376 00:19:38.376 Run status group 0 (all jobs): 00:19:38.376 READ: bw=31.7MiB/s (33.3MB/s), 31.7MiB/s-31.7MiB/s (33.3MB/s-33.3MB/s), io=255MiB (267MB), run=8027-8027msec 00:19:38.376 WRITE: bw=47.5MiB/s (49.8MB/s), 47.5MiB/s-47.5MiB/s (49.8MB/s-49.8MB/s), io=256MiB (268MB), run=5391-5391msec 00:19:40.279 ----------------------------------------------------- 00:19:40.279 Suppressions used: 00:19:40.279 count bytes template 00:19:40.279 1 5 /usr/src/fio/parse.c 00:19:40.279 2 192 /usr/src/fio/iolog.c 00:19:40.279 1 8 libtcmalloc_minimal.so 00:19:40.279 1 904 libcrypto.so 00:19:40.279 ----------------------------------------------------- 00:19:40.279 00:19:40.279 16:13:58 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:40.279 16:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:40.279 16:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:40.279 16:13:58 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:40.279 Remove shared memory files 00:19:40.279 16:13:58 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:40.279 16:13:58 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:40.279 16:13:58 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:40.279 16:13:58 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:40.279 16:13:58 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57783 /dev/shm/spdk_tgt_trace.pid73027 00:19:40.279 16:13:58 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:40.279 16:13:58 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:40.279 00:19:40.279 ************************************ 00:19:40.279 END TEST ftl_fio_basic 00:19:40.279 ************************************ 00:19:40.279 real 1m9.041s 00:19:40.279 user 2m29.717s 00:19:40.279 sys 0m3.936s 00:19:40.279 16:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:40.279 16:13:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:40.279 16:13:58 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:40.279 16:13:58 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:40.280 16:13:58 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:40.280 16:13:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:40.280 ************************************ 00:19:40.280 START TEST ftl_bdevperf 00:19:40.280 ************************************ 00:19:40.280 16:13:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:40.538 * Looking for test storage... 
00:19:40.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:40.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.538 --rc genhtml_branch_coverage=1 00:19:40.538 --rc genhtml_function_coverage=1 00:19:40.538 --rc genhtml_legend=1 00:19:40.538 --rc geninfo_all_blocks=1 00:19:40.538 --rc geninfo_unexecuted_blocks=1 00:19:40.538 00:19:40.538 ' 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:40.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.538 --rc genhtml_branch_coverage=1 00:19:40.538 
--rc genhtml_function_coverage=1 00:19:40.538 --rc genhtml_legend=1 00:19:40.538 --rc geninfo_all_blocks=1 00:19:40.538 --rc geninfo_unexecuted_blocks=1 00:19:40.538 00:19:40.538 ' 00:19:40.538 16:13:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:40.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.538 --rc genhtml_branch_coverage=1 00:19:40.538 --rc genhtml_function_coverage=1 00:19:40.538 --rc genhtml_legend=1 00:19:40.538 --rc geninfo_all_blocks=1 00:19:40.538 --rc geninfo_unexecuted_blocks=1 00:19:40.538 00:19:40.538 ' 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:40.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.539 --rc genhtml_branch_coverage=1 00:19:40.539 --rc genhtml_function_coverage=1 00:19:40.539 --rc genhtml_legend=1 00:19:40.539 --rc geninfo_all_blocks=1 00:19:40.539 --rc geninfo_unexecuted_blocks=1 00:19:40.539 00:19:40.539 ' 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75032 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:40.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75032 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 75032 ']' 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:40.539 16:13:59 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:40.797 [2024-11-04 16:13:59.269818] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:19:40.797 [2024-11-04 16:13:59.270092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75032 ] 00:19:40.797 [2024-11-04 16:13:59.441978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.056 [2024-11-04 16:13:59.577863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.623 16:14:00 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:41.623 16:14:00 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:19:41.623 16:14:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:41.623 16:14:00 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:41.623 16:14:00 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:41.623 16:14:00 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:41.623 16:14:00 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:41.623 16:14:00 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:41.881 16:14:00 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:41.881 16:14:00 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:41.881 16:14:00 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:41.881 16:14:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:19:41.881 16:14:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:41.881 16:14:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:19:41.881 16:14:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:19:41.881 16:14:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:41.881 16:14:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:41.881 { 00:19:41.881 "name": "nvme0n1", 00:19:41.881 "aliases": [ 00:19:41.881 "0bff117f-f3cc-4ac2-8058-564742c174b0" 00:19:41.881 ], 00:19:41.881 "product_name": "NVMe disk", 00:19:41.881 "block_size": 4096, 00:19:41.881 "num_blocks": 1310720, 00:19:41.881 "uuid": "0bff117f-f3cc-4ac2-8058-564742c174b0", 00:19:41.881 "numa_id": -1, 00:19:41.881 "assigned_rate_limits": { 00:19:41.881 "rw_ios_per_sec": 0, 00:19:41.881 "rw_mbytes_per_sec": 0, 00:19:41.881 "r_mbytes_per_sec": 0, 00:19:41.881 "w_mbytes_per_sec": 0 00:19:41.881 }, 00:19:41.881 "claimed": true, 00:19:41.881 "claim_type": "read_many_write_one", 00:19:41.881 "zoned": false, 00:19:41.881 "supported_io_types": { 00:19:41.881 "read": true, 00:19:41.881 "write": true, 00:19:41.881 "unmap": true, 00:19:41.881 "flush": true, 00:19:41.881 "reset": true, 00:19:41.881 "nvme_admin": true, 00:19:41.881 "nvme_io": true, 00:19:41.881 "nvme_io_md": false, 00:19:41.881 "write_zeroes": true, 00:19:41.881 "zcopy": false, 00:19:41.881 "get_zone_info": false, 00:19:41.881 "zone_management": false, 00:19:41.881 "zone_append": false, 00:19:41.881 "compare": true, 00:19:41.881 "compare_and_write": false, 00:19:41.881 "abort": true, 00:19:41.881 "seek_hole": false, 00:19:41.881 "seek_data": false, 00:19:41.881 "copy": true, 00:19:41.881 "nvme_iov_md": false 00:19:41.881 }, 00:19:41.881 "driver_specific": { 00:19:41.881 
"nvme": [ 00:19:41.881 { 00:19:41.881 "pci_address": "0000:00:11.0", 00:19:41.881 "trid": { 00:19:41.881 "trtype": "PCIe", 00:19:41.881 "traddr": "0000:00:11.0" 00:19:41.881 }, 00:19:41.881 "ctrlr_data": { 00:19:41.881 "cntlid": 0, 00:19:41.881 "vendor_id": "0x1b36", 00:19:41.881 "model_number": "QEMU NVMe Ctrl", 00:19:41.881 "serial_number": "12341", 00:19:41.881 "firmware_revision": "8.0.0", 00:19:41.881 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:41.881 "oacs": { 00:19:41.881 "security": 0, 00:19:41.881 "format": 1, 00:19:41.881 "firmware": 0, 00:19:41.881 "ns_manage": 1 00:19:41.881 }, 00:19:41.881 "multi_ctrlr": false, 00:19:41.881 "ana_reporting": false 00:19:41.881 }, 00:19:41.881 "vs": { 00:19:41.881 "nvme_version": "1.4" 00:19:41.881 }, 00:19:41.881 "ns_data": { 00:19:41.881 "id": 1, 00:19:41.881 "can_share": false 00:19:41.881 } 00:19:41.881 } 00:19:41.881 ], 00:19:41.882 "mp_policy": "active_passive" 00:19:41.882 } 00:19:41.882 } 00:19:41.882 ]' 00:19:42.140 16:14:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:42.140 16:14:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:19:42.140 16:14:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:42.140 16:14:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:19:42.140 16:14:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:19:42.140 16:14:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:19:42.140 16:14:00 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:42.140 16:14:00 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:42.140 16:14:00 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:42.140 16:14:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:42.140 16:14:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:42.399 16:14:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=baa399dc-cd4b-4579-8dc9-b4cb372b2adb 00:19:42.399 16:14:00 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:42.399 16:14:00 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u baa399dc-cd4b-4579-8dc9-b4cb372b2adb 00:19:42.657 16:14:01 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:42.657 16:14:01 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=cecf5f1c-4843-4702-b54f-9059bb5698c6 00:19:42.657 16:14:01 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u cecf5f1c-4843-4702-b54f-9059bb5698c6 00:19:42.915 16:14:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=ae6e73b4-945f-4017-8a36-5950102e7791 00:19:42.915 16:14:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ae6e73b4-945f-4017-8a36-5950102e7791 00:19:42.915 16:14:01 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:42.915 16:14:01 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:42.915 16:14:01 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=ae6e73b4-945f-4017-8a36-5950102e7791 00:19:42.915 16:14:01 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:42.915 16:14:01 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size ae6e73b4-945f-4017-8a36-5950102e7791 00:19:42.915 16:14:01 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=ae6e73b4-945f-4017-8a36-5950102e7791 00:19:42.915 16:14:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:42.915 16:14:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:19:42.915 16:14:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:19:42.915 16:14:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ae6e73b4-945f-4017-8a36-5950102e7791 00:19:43.174 16:14:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:43.174 { 00:19:43.174 "name": "ae6e73b4-945f-4017-8a36-5950102e7791", 00:19:43.174 "aliases": [ 00:19:43.174 "lvs/nvme0n1p0" 00:19:43.174 ], 00:19:43.174 "product_name": "Logical Volume", 00:19:43.174 "block_size": 4096, 00:19:43.174 "num_blocks": 26476544, 00:19:43.174 "uuid": "ae6e73b4-945f-4017-8a36-5950102e7791", 00:19:43.174 "assigned_rate_limits": { 00:19:43.174 "rw_ios_per_sec": 0, 00:19:43.174 "rw_mbytes_per_sec": 0, 00:19:43.174 "r_mbytes_per_sec": 0, 00:19:43.174 "w_mbytes_per_sec": 0 00:19:43.174 }, 00:19:43.174 "claimed": false, 00:19:43.174 "zoned": false, 00:19:43.174 "supported_io_types": { 00:19:43.174 "read": true, 00:19:43.174 "write": true, 00:19:43.174 "unmap": true, 00:19:43.174 "flush": false, 00:19:43.174 "reset": true, 00:19:43.174 "nvme_admin": false, 00:19:43.174 "nvme_io": false, 00:19:43.174 "nvme_io_md": false, 00:19:43.174 "write_zeroes": true, 00:19:43.174 "zcopy": false, 00:19:43.174 "get_zone_info": false, 00:19:43.174 "zone_management": false, 00:19:43.174 "zone_append": false, 00:19:43.174 "compare": false, 00:19:43.174 "compare_and_write": false, 00:19:43.174 "abort": false, 00:19:43.174 "seek_hole": true, 00:19:43.174 "seek_data": true, 00:19:43.174 "copy": false, 00:19:43.174 "nvme_iov_md": false 00:19:43.174 }, 00:19:43.174 "driver_specific": { 00:19:43.174 "lvol": { 00:19:43.174 "lvol_store_uuid": "cecf5f1c-4843-4702-b54f-9059bb5698c6", 00:19:43.174 "base_bdev": "nvme0n1", 00:19:43.174 "thin_provision": true, 00:19:43.174 "num_allocated_clusters": 0, 00:19:43.174 "snapshot": false, 00:19:43.174 "clone": false, 00:19:43.174 "esnap_clone": false 00:19:43.174 } 00:19:43.174 } 00:19:43.174 } 00:19:43.174 ]' 00:19:43.174 16:14:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:43.174 16:14:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:19:43.174 16:14:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:43.174 16:14:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:43.174 16:14:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:43.174 16:14:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:19:43.174 16:14:01 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:43.174 16:14:01 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:43.174 16:14:01 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:43.433 16:14:02 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:43.433 16:14:02 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:43.433 16:14:02 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size ae6e73b4-945f-4017-8a36-5950102e7791 00:19:43.433 16:14:02 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bdev_name=ae6e73b4-945f-4017-8a36-5950102e7791 00:19:43.433 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:43.433 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:19:43.433 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:19:43.433 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ae6e73b4-945f-4017-8a36-5950102e7791 00:19:43.692 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:43.692 { 00:19:43.692 "name": "ae6e73b4-945f-4017-8a36-5950102e7791", 00:19:43.692 "aliases": [ 00:19:43.692 "lvs/nvme0n1p0" 00:19:43.692 ], 00:19:43.692 "product_name": "Logical Volume", 00:19:43.692 "block_size": 4096, 00:19:43.692 "num_blocks": 26476544, 00:19:43.692 "uuid": "ae6e73b4-945f-4017-8a36-5950102e7791", 00:19:43.692 "assigned_rate_limits": { 00:19:43.692 "rw_ios_per_sec": 0, 00:19:43.692 "rw_mbytes_per_sec": 0, 00:19:43.692 "r_mbytes_per_sec": 0, 00:19:43.692 "w_mbytes_per_sec": 0 00:19:43.692 }, 00:19:43.692 "claimed": false, 00:19:43.692 "zoned": false, 00:19:43.692 "supported_io_types": { 00:19:43.692 "read": true, 00:19:43.692 "write": true, 00:19:43.692 "unmap": true, 00:19:43.692 "flush": false, 00:19:43.692 "reset": true, 00:19:43.692 "nvme_admin": false, 00:19:43.692 "nvme_io": false, 00:19:43.692 "nvme_io_md": false, 00:19:43.692 "write_zeroes": true, 00:19:43.692 "zcopy": false, 00:19:43.692 "get_zone_info": false, 00:19:43.692 "zone_management": false, 00:19:43.692 "zone_append": false, 00:19:43.692 "compare": false, 00:19:43.692 "compare_and_write": false, 00:19:43.692 "abort": false, 00:19:43.692 "seek_hole": true, 00:19:43.692 "seek_data": true, 00:19:43.692 "copy": false, 00:19:43.692 "nvme_iov_md": false 00:19:43.692 }, 00:19:43.692 "driver_specific": { 00:19:43.692 "lvol": { 00:19:43.692 "lvol_store_uuid": "cecf5f1c-4843-4702-b54f-9059bb5698c6", 00:19:43.692 "base_bdev": "nvme0n1", 00:19:43.692 "thin_provision": true, 00:19:43.692 "num_allocated_clusters": 0, 00:19:43.692 "snapshot": false, 00:19:43.692 "clone": false, 00:19:43.692 "esnap_clone": false 00:19:43.692 } 00:19:43.692 } 00:19:43.692 } 00:19:43.692 ]' 00:19:43.692 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:43.692 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:19:43.692 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:43.692 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:43.692 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:43.692 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:19:43.692 16:14:02 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:43.692 16:14:02 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:43.951 16:14:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:43.951 16:14:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size ae6e73b4-945f-4017-8a36-5950102e7791 00:19:43.951 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=ae6e73b4-945f-4017-8a36-5950102e7791 00:19:43.951 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:43.951 16:14:02 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:19:43.951 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:19:43.951 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ae6e73b4-945f-4017-8a36-5950102e7791 00:19:44.209 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:44.209 { 00:19:44.209 "name": "ae6e73b4-945f-4017-8a36-5950102e7791", 00:19:44.209 "aliases": [ 00:19:44.209 "lvs/nvme0n1p0" 00:19:44.209 ], 00:19:44.209 "product_name": "Logical Volume", 00:19:44.209 "block_size": 4096, 00:19:44.209 "num_blocks": 26476544, 00:19:44.209 "uuid": "ae6e73b4-945f-4017-8a36-5950102e7791", 00:19:44.209 "assigned_rate_limits": { 00:19:44.209 "rw_ios_per_sec": 0, 00:19:44.209 "rw_mbytes_per_sec": 0, 00:19:44.209 "r_mbytes_per_sec": 0, 00:19:44.209 "w_mbytes_per_sec": 0 00:19:44.209 }, 00:19:44.209 "claimed": false, 00:19:44.209 "zoned": false, 00:19:44.209 "supported_io_types": { 00:19:44.210 "read": true, 00:19:44.210 "write": true, 00:19:44.210 "unmap": true, 00:19:44.210 "flush": false, 00:19:44.210 "reset": true, 00:19:44.210 "nvme_admin": false, 00:19:44.210 "nvme_io": false, 00:19:44.210 "nvme_io_md": false, 00:19:44.210 "write_zeroes": true, 00:19:44.210 "zcopy": false, 00:19:44.210 "get_zone_info": false, 00:19:44.210 "zone_management": false, 00:19:44.210 "zone_append": false, 00:19:44.210 "compare": false, 00:19:44.210 "compare_and_write": false, 00:19:44.210 "abort": false, 00:19:44.210 "seek_hole": true, 00:19:44.210 "seek_data": true, 00:19:44.210 "copy": false, 00:19:44.210 "nvme_iov_md": false 00:19:44.210 }, 00:19:44.210 "driver_specific": { 00:19:44.210 "lvol": { 00:19:44.210 "lvol_store_uuid": "cecf5f1c-4843-4702-b54f-9059bb5698c6", 00:19:44.210 "base_bdev": "nvme0n1", 00:19:44.210 "thin_provision": true, 00:19:44.210 "num_allocated_clusters": 0, 00:19:44.210 "snapshot": false, 00:19:44.210 "clone": false, 00:19:44.210 "esnap_clone": false 00:19:44.210 } 00:19:44.210 } 00:19:44.210 } 00:19:44.210 ]' 00:19:44.210 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:44.210 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:19:44.210 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:44.210 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:44.210 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:44.210 16:14:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:19:44.210 16:14:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:44.210 16:14:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ae6e73b4-945f-4017-8a36-5950102e7791 -c nvc0n1p0 --l2p_dram_limit 20 00:19:44.470 [2024-11-04 16:14:03.038812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.470 [2024-11-04 16:14:03.038878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:44.470 [2024-11-04 16:14:03.038896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:44.470 [2024-11-04 16:14:03.038910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.470 [2024-11-04 16:14:03.038973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.470 [2024-11-04 16:14:03.038993] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:44.470 [2024-11-04 16:14:03.039005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:44.470 [2024-11-04 16:14:03.039018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.470 [2024-11-04 16:14:03.039039] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:44.470 [2024-11-04 16:14:03.040087] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:44.470 [2024-11-04 16:14:03.040111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.470 [2024-11-04 16:14:03.040126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:44.470 [2024-11-04 16:14:03.040138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms 00:19:44.470 [2024-11-04 16:14:03.040151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.470 [2024-11-04 16:14:03.040189] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID fa671958-8974-45c9-961a-348705fb5d75 00:19:44.470 [2024-11-04 16:14:03.042563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.470 [2024-11-04 16:14:03.042763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:44.470 [2024-11-04 16:14:03.042792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:44.470 [2024-11-04 16:14:03.042811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.470 [2024-11-04 16:14:03.056607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.470 [2024-11-04 16:14:03.056638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:44.470 [2024-11-04 16:14:03.056666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.694 ms 00:19:44.470 [2024-11-04 16:14:03.056677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.470 [2024-11-04 16:14:03.056962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.470 [2024-11-04 16:14:03.057012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:44.470 [2024-11-04 16:14:03.057037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:19:44.470 [2024-11-04 16:14:03.057047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.470 [2024-11-04 16:14:03.057116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.470 [2024-11-04 16:14:03.057129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:44.470 [2024-11-04 16:14:03.057145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:44.470 [2024-11-04 16:14:03.057156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.470 [2024-11-04 16:14:03.057183] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:44.470 [2024-11-04 16:14:03.062588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.470 [2024-11-04 16:14:03.062624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:44.470 [2024-11-04 16:14:03.062637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.424 ms 00:19:44.470 [2024-11-04 16:14:03.062652] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.470 [2024-11-04 16:14:03.062687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.470 [2024-11-04 16:14:03.062701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:44.470 [2024-11-04 16:14:03.062712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:44.470 [2024-11-04 16:14:03.062725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.470 [2024-11-04 16:14:03.062770] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:44.470 [2024-11-04 16:14:03.062915] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:44.470 [2024-11-04 16:14:03.062931] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:44.470 [2024-11-04 16:14:03.062949] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:44.470 [2024-11-04 16:14:03.062964] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:44.470 [2024-11-04 16:14:03.062980] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:44.470 [2024-11-04 16:14:03.062992] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:44.470 [2024-11-04 16:14:03.063006] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:44.470 [2024-11-04 16:14:03.063016] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:44.470 [2024-11-04 16:14:03.063030] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:44.470 [2024-11-04 16:14:03.063041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.470 [2024-11-04 16:14:03.063058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:44.470 [2024-11-04 16:14:03.063069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:19:44.470 [2024-11-04 16:14:03.063083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.470 [2024-11-04 16:14:03.063155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.470 [2024-11-04 16:14:03.063172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:44.470 [2024-11-04 16:14:03.063183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:44.470 [2024-11-04 16:14:03.063199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.470 [2024-11-04 16:14:03.063278] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:44.470 [2024-11-04 16:14:03.063294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:44.471 [2024-11-04 16:14:03.063309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:44.471 [2024-11-04 16:14:03.063322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.471 [2024-11-04 16:14:03.063333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:44.471 [2024-11-04 16:14:03.063346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:44.471 [2024-11-04 16:14:03.063356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:44.471 
[2024-11-04 16:14:03.063368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:44.471 [2024-11-04 16:14:03.063377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:44.471 [2024-11-04 16:14:03.063390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:44.471 [2024-11-04 16:14:03.063400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:44.471 [2024-11-04 16:14:03.063412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:44.471 [2024-11-04 16:14:03.063422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:44.471 [2024-11-04 16:14:03.063447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:44.471 [2024-11-04 16:14:03.063464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:44.471 [2024-11-04 16:14:03.063481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.471 [2024-11-04 16:14:03.063490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:44.471 [2024-11-04 16:14:03.063502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:44.471 [2024-11-04 16:14:03.063512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.471 [2024-11-04 16:14:03.063526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:44.471 [2024-11-04 16:14:03.063536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:44.471 [2024-11-04 16:14:03.063548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.471 [2024-11-04 16:14:03.063556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:44.471 [2024-11-04 16:14:03.063569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:44.471 [2024-11-04 16:14:03.063578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.471 [2024-11-04 16:14:03.063590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:44.471 [2024-11-04 16:14:03.063599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:44.471 [2024-11-04 16:14:03.063611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.471 [2024-11-04 16:14:03.063621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:44.471 [2024-11-04 16:14:03.063633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:44.471 [2024-11-04 16:14:03.063642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.471 [2024-11-04 16:14:03.063658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:44.471 [2024-11-04 16:14:03.063667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:44.471 [2024-11-04 16:14:03.063680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:44.471 [2024-11-04 16:14:03.063690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:44.471 [2024-11-04 16:14:03.063702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:44.471 [2024-11-04 16:14:03.063711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:44.471 [2024-11-04 16:14:03.063724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:44.471 [2024-11-04 16:14:03.063733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:44.471 [2024-11-04 16:14:03.063759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.471 [2024-11-04 16:14:03.063769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:44.471 [2024-11-04 16:14:03.063792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:44.471 [2024-11-04 16:14:03.063802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.471 [2024-11-04 16:14:03.063813] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:44.471 [2024-11-04 16:14:03.063824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:44.471 [2024-11-04 16:14:03.063836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:44.471 [2024-11-04 16:14:03.063849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.471 [2024-11-04 16:14:03.063866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:44.471 [2024-11-04 16:14:03.063875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:44.471 [2024-11-04 16:14:03.063887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:44.471 [2024-11-04 16:14:03.063896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:44.471 [2024-11-04 16:14:03.063908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:44.471 [2024-11-04 16:14:03.063917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:44.471 [2024-11-04 16:14:03.063934] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:44.471 [2024-11-04 16:14:03.063946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:44.471 [2024-11-04 16:14:03.063960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:44.471 [2024-11-04 16:14:03.063970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:44.471 [2024-11-04 16:14:03.063983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:44.471 [2024-11-04 16:14:03.063993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:44.471 [2024-11-04 16:14:03.064006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:44.471 [2024-11-04 16:14:03.064016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:44.471 [2024-11-04 16:14:03.064028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:44.471 [2024-11-04 16:14:03.064038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:44.471 [2024-11-04 16:14:03.064053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:44.471 [2024-11-04 16:14:03.064063] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:44.471 [2024-11-04 16:14:03.064075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:44.471 [2024-11-04 16:14:03.064085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:44.471 [2024-11-04 16:14:03.064098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:44.471 [2024-11-04 16:14:03.064108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:44.471 [2024-11-04 16:14:03.064120] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:44.471 [2024-11-04 16:14:03.064131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:44.471 [2024-11-04 16:14:03.064149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:44.471 [2024-11-04 16:14:03.064159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:44.471 [2024-11-04 16:14:03.064172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:44.471 [2024-11-04 16:14:03.064182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:44.471 [2024-11-04 16:14:03.064197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.471 [2024-11-04 16:14:03.064210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:44.471 [2024-11-04 16:14:03.064223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:19:44.471 [2024-11-04 16:14:03.064236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.471 [2024-11-04 16:14:03.064281] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
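The MiB figures in the layout dump and the blk_offs/blk_sz values in the superblock metadata dump describe the same regions in different units. Assuming 4 KiB FTL blocks and that band_md corresponds to the type:0x3 entry (both only inferred from the numbers printed here), the conversion can be cross-checked with a one-liner:

  # band_md from the SB metadata dump above: blk_offs=0x5020, blk_sz=0x80 (4 KiB blocks assumed)
  blk_offs=$((0x5020)); blk_sz=$((0x80))
  echo "$blk_offs $blk_sz" | awk '{ printf "offset: %.2f MiB, blocks: %.2f MiB\n", $1 * 4096 / 1048576, $2 * 4096 / 1048576 }'
  # reproduces the ~80.12 MiB offset and 0.50 MiB size reported for Region band_md above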
00:19:44.471 [2024-11-04 16:14:03.064295] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:48.662 [2024-11-04 16:14:06.495934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:06.496004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:48.662 [2024-11-04 16:14:06.496029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3437.219 ms 00:19:48.662 [2024-11-04 16:14:06.496039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:06.535208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:06.535464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:48.662 [2024-11-04 16:14:06.535496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.923 ms 00:19:48.662 [2024-11-04 16:14:06.535508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:06.535639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:06.535653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:48.662 [2024-11-04 16:14:06.535671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:48.662 [2024-11-04 16:14:06.535681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:06.609347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:06.609526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:48.662 [2024-11-04 16:14:06.609574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.742 ms 00:19:48.662 [2024-11-04 16:14:06.609585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:06.609626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:06.609640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:48.662 [2024-11-04 16:14:06.609654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:48.662 [2024-11-04 16:14:06.609664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:06.610181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:06.610197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:48.662 [2024-11-04 16:14:06.610210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:19:48.662 [2024-11-04 16:14:06.610220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:06.610329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:06.610342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:48.662 [2024-11-04 16:14:06.610358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:19:48.662 [2024-11-04 16:14:06.610368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:06.628326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:06.628376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:48.662 [2024-11-04 
16:14:06.628393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.967 ms 00:19:48.662 [2024-11-04 16:14:06.628419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:06.640456] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:48.662 [2024-11-04 16:14:06.646437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:06.646470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:48.662 [2024-11-04 16:14:06.646482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.968 ms 00:19:48.662 [2024-11-04 16:14:06.646502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:06.734380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:06.734655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:48.662 [2024-11-04 16:14:06.734698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.976 ms 00:19:48.662 [2024-11-04 16:14:06.734711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:06.734907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:06.734927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:48.662 [2024-11-04 16:14:06.734939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:19:48.662 [2024-11-04 16:14:06.734952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:06.770373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:06.770415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:48.662 [2024-11-04 16:14:06.770429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.422 ms 00:19:48.662 [2024-11-04 16:14:06.770441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:06.804377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:06.804547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:48.662 [2024-11-04 16:14:06.804569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.952 ms 00:19:48.662 [2024-11-04 16:14:06.804582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:06.805318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:06.805344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:48.662 [2024-11-04 16:14:06.805356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:19:48.662 [2024-11-04 16:14:06.805368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:06.905309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:06.905363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:48.662 [2024-11-04 16:14:06.905379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.048 ms 00:19:48.662 [2024-11-04 16:14:06.905392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 
16:14:06.942029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:06.942087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:48.662 [2024-11-04 16:14:06.942102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.617 ms 00:19:48.662 [2024-11-04 16:14:06.942134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:06.977722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:06.977794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:48.662 [2024-11-04 16:14:06.977825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.605 ms 00:19:48.662 [2024-11-04 16:14:06.977837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:07.012070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:07.012111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:48.662 [2024-11-04 16:14:07.012124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.248 ms 00:19:48.662 [2024-11-04 16:14:07.012136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:07.012179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:07.012195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:48.662 [2024-11-04 16:14:07.012206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:48.662 [2024-11-04 16:14:07.012217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:07.012312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.662 [2024-11-04 16:14:07.012326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:48.662 [2024-11-04 16:14:07.012336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:48.662 [2024-11-04 16:14:07.012348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.662 [2024-11-04 16:14:07.013393] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3980.632 ms, result 0 00:19:48.662 { 00:19:48.662 "name": "ftl0", 00:19:48.662 "uuid": "fa671958-8974-45c9-961a-348705fb5d75" 00:19:48.662 } 00:19:48.662 16:14:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:48.662 16:14:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:48.662 16:14:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:48.662 16:14:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:48.662 [2024-11-04 16:14:07.317327] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:48.662 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:48.662 Zero copy mechanism will not be used. 00:19:48.662 Running I/O for 4 seconds... 
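Before the first bdevperf run, the harness confirms the new FTL bdev is registered by piping bdev_ftl_get_stats through jq and grep. A minimal standalone version of that check, reusing the exact rpc.py invocation and bdev name traced in this run, would be:

  # sanity check: the stats RPC should report a bdev named ftl0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0 \
      && echo "ftl0 registered" \
      || echo "ftl0 not found" >&2

The 69632-byte I/O size (68 KiB) chosen for this first run exceeds the 65536-byte zero-copy threshold, which is why the log notes that zero copy is disabled for it.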
00:19:50.606 1477.00 IOPS, 98.08 MiB/s [2024-11-04T16:14:10.705Z] 1513.00 IOPS, 100.47 MiB/s [2024-11-04T16:14:11.641Z] 1550.67 IOPS, 102.97 MiB/s [2024-11-04T16:14:11.641Z] 1581.50 IOPS, 105.02 MiB/s 00:19:52.919 Latency(us) 00:19:52.919 [2024-11-04T16:14:11.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.919 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:52.919 ftl0 : 4.00 1581.15 105.00 0.00 0.00 666.65 228.65 1947.66 00:19:52.919 [2024-11-04T16:14:11.641Z] =================================================================================================================== 00:19:52.919 [2024-11-04T16:14:11.641Z] Total : 1581.15 105.00 0.00 0.00 666.65 228.65 1947.66 00:19:52.919 { 00:19:52.919 "results": [ 00:19:52.919 { 00:19:52.919 "job": "ftl0", 00:19:52.919 "core_mask": "0x1", 00:19:52.919 "workload": "randwrite", 00:19:52.919 "status": "finished", 00:19:52.919 "queue_depth": 1, 00:19:52.919 "io_size": 69632, 00:19:52.919 "runtime": 4.001509, 00:19:52.919 "iops": 1581.1535098384134, 00:19:52.919 "mibps": 104.99847526270713, 00:19:52.919 "io_failed": 0, 00:19:52.919 "io_timeout": 0, 00:19:52.919 "avg_latency_us": 666.6514468812503, 00:19:52.919 "min_latency_us": 228.65220883534136, 00:19:52.919 "max_latency_us": 1947.6562248995983 00:19:52.919 } 00:19:52.919 ], 00:19:52.919 "core_count": 1 00:19:52.919 } 00:19:52.919 [2024-11-04 16:14:11.321808] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:52.919 16:14:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:52.919 [2024-11-04 16:14:11.438255] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:52.919 Running I/O for 4 seconds... 
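Each perform_tests call prints its summary both as a human-readable table and as the JSON object shown above. A short sketch of pulling the headline numbers out of a saved copy of that JSON (results.json is a hypothetical file name, not produced by the harness):

  # print job name, IOPS, throughput and average latency from a saved perform_tests payload
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, \(.avg_latency_us) us avg"' results.json
  # for the qd=1 randwrite run above this prints: ftl0: 1581.15... IOPS, 104.99... MiB/s, 666.65... us avg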
00:19:54.790 10177.00 IOPS, 39.75 MiB/s [2024-11-04T16:14:14.447Z] 10183.50 IOPS, 39.78 MiB/s [2024-11-04T16:14:15.821Z] 10315.33 IOPS, 40.29 MiB/s [2024-11-04T16:14:15.821Z] 10585.25 IOPS, 41.35 MiB/s 00:19:57.099 Latency(us) 00:19:57.099 [2024-11-04T16:14:15.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.099 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:57.099 ftl0 : 4.02 10577.30 41.32 0.00 0.00 12077.75 220.43 32846.96 00:19:57.099 [2024-11-04T16:14:15.821Z] =================================================================================================================== 00:19:57.099 [2024-11-04T16:14:15.821Z] Total : 10577.30 41.32 0.00 0.00 12077.75 0.00 32846.96 00:19:57.099 [2024-11-04 16:14:15.456821] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:57.099 { 00:19:57.099 "results": [ 00:19:57.099 { 00:19:57.099 "job": "ftl0", 00:19:57.099 "core_mask": "0x1", 00:19:57.099 "workload": "randwrite", 00:19:57.099 "status": "finished", 00:19:57.099 "queue_depth": 128, 00:19:57.099 "io_size": 4096, 00:19:57.099 "runtime": 4.015013, 00:19:57.099 "iops": 10577.300745975168, 00:19:57.099 "mibps": 41.3175810389655, 00:19:57.099 "io_failed": 0, 00:19:57.099 "io_timeout": 0, 00:19:57.099 "avg_latency_us": 12077.751401045456, 00:19:57.099 "min_latency_us": 220.4273092369478, 00:19:57.099 "max_latency_us": 32846.95903614458 00:19:57.099 } 00:19:57.099 ], 00:19:57.099 "core_count": 1 00:19:57.099 } 00:19:57.099 16:14:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:57.099 [2024-11-04 16:14:15.595995] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:57.099 Running I/O for 4 seconds... 
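The MiB/s column is simply IOPS scaled by the I/O size; a quick cross-check of the 4 KiB, queue-depth 128 randwrite run above:

  # throughput in MiB/s = iops * io_size / 2^20; matches the 41.32 MiB/s reported above
  awk 'BEGIN { printf "%.2f MiB/s\n", 10577.300745975168 * 4096 / 1048576 }'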
00:19:58.971 8166.00 IOPS, 31.90 MiB/s [2024-11-04T16:14:18.628Z] 8235.00 IOPS, 32.17 MiB/s [2024-11-04T16:14:20.007Z] 8188.33 IOPS, 31.99 MiB/s [2024-11-04T16:14:20.007Z] 8093.25 IOPS, 31.61 MiB/s 00:20:01.285 Latency(us) 00:20:01.285 [2024-11-04T16:14:20.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.285 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:01.285 Verification LBA range: start 0x0 length 0x1400000 00:20:01.285 ftl0 : 4.01 8103.21 31.65 0.00 0.00 15748.81 273.07 32425.84 00:20:01.285 [2024-11-04T16:14:20.007Z] =================================================================================================================== 00:20:01.285 [2024-11-04T16:14:20.007Z] Total : 8103.21 31.65 0.00 0.00 15748.81 0.00 32425.84 00:20:01.285 [2024-11-04 16:14:19.618939] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:01.285 { 00:20:01.285 "results": [ 00:20:01.285 { 00:20:01.285 "job": "ftl0", 00:20:01.285 "core_mask": "0x1", 00:20:01.285 "workload": "verify", 00:20:01.285 "status": "finished", 00:20:01.285 "verify_range": { 00:20:01.285 "start": 0, 00:20:01.285 "length": 20971520 00:20:01.285 }, 00:20:01.285 "queue_depth": 128, 00:20:01.285 "io_size": 4096, 00:20:01.285 "runtime": 4.010384, 00:20:01.285 "iops": 8103.21405630982, 00:20:01.285 "mibps": 31.653179907460235, 00:20:01.285 "io_failed": 0, 00:20:01.285 "io_timeout": 0, 00:20:01.285 "avg_latency_us": 15748.813861223889, 00:20:01.285 "min_latency_us": 273.06666666666666, 00:20:01.285 "max_latency_us": 32425.84417670683 00:20:01.285 } 00:20:01.285 ], 00:20:01.285 "core_count": 1 00:20:01.285 } 00:20:01.285 16:14:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:20:01.285 [2024-11-04 16:14:19.825789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.285 [2024-11-04 16:14:19.825844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:01.285 [2024-11-04 16:14:19.825881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:01.285 [2024-11-04 16:14:19.825897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.285 [2024-11-04 16:14:19.825924] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:01.285 [2024-11-04 16:14:19.829792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.285 [2024-11-04 16:14:19.829826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:01.285 [2024-11-04 16:14:19.829843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.849 ms 00:20:01.285 [2024-11-04 16:14:19.829872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.285 [2024-11-04 16:14:19.831740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.285 [2024-11-04 16:14:19.831797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:01.285 [2024-11-04 16:14:19.831820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.832 ms 00:20:01.285 [2024-11-04 16:14:19.831834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.550 [2024-11-04 16:14:20.050136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.550 [2024-11-04 16:14:20.050195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:20:01.550 [2024-11-04 16:14:20.050221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 218.617 ms 00:20:01.550 [2024-11-04 16:14:20.050234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.550 [2024-11-04 16:14:20.055206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.550 [2024-11-04 16:14:20.055253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:01.550 [2024-11-04 16:14:20.055273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.928 ms 00:20:01.550 [2024-11-04 16:14:20.055285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.550 [2024-11-04 16:14:20.091373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.550 [2024-11-04 16:14:20.091565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:01.550 [2024-11-04 16:14:20.091597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.061 ms 00:20:01.550 [2024-11-04 16:14:20.091610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.550 [2024-11-04 16:14:20.112617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.550 [2024-11-04 16:14:20.112661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:01.550 [2024-11-04 16:14:20.112685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.994 ms 00:20:01.550 [2024-11-04 16:14:20.112697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.550 [2024-11-04 16:14:20.112894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.550 [2024-11-04 16:14:20.112911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:01.550 [2024-11-04 16:14:20.112929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:20:01.550 [2024-11-04 16:14:20.112941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.550 [2024-11-04 16:14:20.147914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.550 [2024-11-04 16:14:20.147967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:01.550 [2024-11-04 16:14:20.147987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.004 ms 00:20:01.550 [2024-11-04 16:14:20.148015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.550 [2024-11-04 16:14:20.181622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.550 [2024-11-04 16:14:20.181662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:01.550 [2024-11-04 16:14:20.181680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.614 ms 00:20:01.550 [2024-11-04 16:14:20.181691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.550 [2024-11-04 16:14:20.215699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.550 [2024-11-04 16:14:20.215740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:01.550 [2024-11-04 16:14:20.215772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.017 ms 00:20:01.550 [2024-11-04 16:14:20.215783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.550 [2024-11-04 16:14:20.249758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.550 [2024-11-04 16:14:20.249797] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:01.550 [2024-11-04 16:14:20.249824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.892 ms 00:20:01.550 [2024-11-04 16:14:20.249835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.550 [2024-11-04 16:14:20.249877] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:01.550 [2024-11-04 16:14:20.249895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:01.550 [2024-11-04 16:14:20.249912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:01.550 [2024-11-04 16:14:20.249925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:01.550 [2024-11-04 16:14:20.249940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:01.550 [2024-11-04 16:14:20.249952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:01.550 [2024-11-04 16:14:20.249967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:01.550 [2024-11-04 16:14:20.249979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:01.550 [2024-11-04 16:14:20.249993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:01.550 [2024-11-04 16:14:20.250005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:20:01.551 [2024-11-04 16:14:20.250194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.250993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251280] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:01.551 [2024-11-04 16:14:20.251345] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:01.551 [2024-11-04 16:14:20.251360] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: fa671958-8974-45c9-961a-348705fb5d75 00:20:01.551 [2024-11-04 16:14:20.251372] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:01.551 [2024-11-04 16:14:20.251386] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:01.551 [2024-11-04 16:14:20.251401] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:01.551 [2024-11-04 16:14:20.251416] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:01.551 [2024-11-04 16:14:20.251428] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:01.551 [2024-11-04 16:14:20.251454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:01.551 [2024-11-04 16:14:20.251466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:01.551 [2024-11-04 16:14:20.251482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:01.551 [2024-11-04 16:14:20.251493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:01.551 [2024-11-04 16:14:20.251508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.551 [2024-11-04 16:14:20.251520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:01.551 [2024-11-04 16:14:20.251535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.635 ms 00:20:01.551 [2024-11-04 16:14:20.251547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.809 [2024-11-04 16:14:20.270904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.809 [2024-11-04 16:14:20.270940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:01.809 [2024-11-04 16:14:20.270958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.331 ms 00:20:01.809 [2024-11-04 16:14:20.270969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.809 [2024-11-04 16:14:20.271501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.809 [2024-11-04 16:14:20.271536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:01.809 [2024-11-04 16:14:20.271551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:20:01.809 [2024-11-04 16:14:20.271563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.809 [2024-11-04 16:14:20.323107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.809 [2024-11-04 16:14:20.323147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:01.809 [2024-11-04 16:14:20.323167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.809 [2024-11-04 16:14:20.323196] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:01.809 [2024-11-04 16:14:20.323257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.809 [2024-11-04 16:14:20.323270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:01.809 [2024-11-04 16:14:20.323285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.809 [2024-11-04 16:14:20.323297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.809 [2024-11-04 16:14:20.323430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.809 [2024-11-04 16:14:20.323449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:01.809 [2024-11-04 16:14:20.323464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.809 [2024-11-04 16:14:20.323476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.809 [2024-11-04 16:14:20.323498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.809 [2024-11-04 16:14:20.323510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:01.809 [2024-11-04 16:14:20.323525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.809 [2024-11-04 16:14:20.323536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.810 [2024-11-04 16:14:20.440191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.810 [2024-11-04 16:14:20.440443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:01.810 [2024-11-04 16:14:20.440495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.810 [2024-11-04 16:14:20.440508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.068 [2024-11-04 16:14:20.533118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.068 [2024-11-04 16:14:20.533167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:02.068 [2024-11-04 16:14:20.533186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.068 [2024-11-04 16:14:20.533197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.068 [2024-11-04 16:14:20.533320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.068 [2024-11-04 16:14:20.533334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:02.068 [2024-11-04 16:14:20.533353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.068 [2024-11-04 16:14:20.533364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.068 [2024-11-04 16:14:20.533419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.068 [2024-11-04 16:14:20.533432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:02.068 [2024-11-04 16:14:20.533447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.068 [2024-11-04 16:14:20.533458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.068 [2024-11-04 16:14:20.533566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.068 [2024-11-04 16:14:20.533581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:02.068 [2024-11-04 16:14:20.533602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:20:02.068 [2024-11-04 16:14:20.533613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.068 [2024-11-04 16:14:20.533657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.068 [2024-11-04 16:14:20.533671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:02.068 [2024-11-04 16:14:20.533685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.068 [2024-11-04 16:14:20.533696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.068 [2024-11-04 16:14:20.533738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.068 [2024-11-04 16:14:20.533778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:02.068 [2024-11-04 16:14:20.533810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.068 [2024-11-04 16:14:20.533826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.068 [2024-11-04 16:14:20.533888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.068 [2024-11-04 16:14:20.533913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:02.068 [2024-11-04 16:14:20.533944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.068 [2024-11-04 16:14:20.533956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.068 [2024-11-04 16:14:20.534097] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 709.410 ms, result 0 00:20:02.068 true 00:20:02.068 16:14:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75032 00:20:02.068 16:14:20 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 75032 ']' 00:20:02.068 16:14:20 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 75032 00:20:02.068 16:14:20 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname 00:20:02.068 16:14:20 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:02.068 16:14:20 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75032 00:20:02.068 killing process with pid 75032 00:20:02.068 Received shutdown signal, test time was about 4.000000 seconds 00:20:02.068 00:20:02.068 Latency(us) 00:20:02.068 [2024-11-04T16:14:20.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.068 [2024-11-04T16:14:20.790Z] =================================================================================================================== 00:20:02.068 [2024-11-04T16:14:20.790Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:02.068 16:14:20 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:02.068 16:14:20 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:02.068 16:14:20 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75032' 00:20:02.068 16:14:20 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 75032 00:20:02.068 16:14:20 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 75032 00:20:07.337 16:14:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:07.337 16:14:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:20:07.337 Remove shared memory files 00:20:07.337 16:14:25 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:07.337 16:14:25 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:20:07.337 16:14:25 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:20:07.337 16:14:25 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:20:07.337 16:14:25 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:07.337 16:14:25 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:20:07.337 ************************************ 00:20:07.337 END TEST ftl_bdevperf 00:20:07.337 ************************************ 00:20:07.337 00:20:07.337 real 0m26.608s 00:20:07.337 user 0m28.952s 00:20:07.337 sys 0m1.344s 00:20:07.337 16:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:07.337 16:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:07.337 16:14:25 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:07.337 16:14:25 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:20:07.337 16:14:25 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:07.337 16:14:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:07.337 ************************************ 00:20:07.337 START TEST ftl_trim 00:20:07.337 ************************************ 00:20:07.337 16:14:25 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:20:07.337 * Looking for test storage... 00:20:07.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:07.337 16:14:25 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:07.337 16:14:25 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:20:07.337 16:14:25 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:07.337 16:14:25 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.337 16:14:25 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:20:07.337 16:14:25 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.337 16:14:25 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:07.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.337 --rc genhtml_branch_coverage=1 00:20:07.337 --rc genhtml_function_coverage=1 00:20:07.337 --rc genhtml_legend=1 00:20:07.337 --rc geninfo_all_blocks=1 00:20:07.337 --rc geninfo_unexecuted_blocks=1 00:20:07.337 00:20:07.337 ' 00:20:07.337 16:14:25 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:07.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.337 --rc genhtml_branch_coverage=1 00:20:07.337 --rc genhtml_function_coverage=1 00:20:07.337 --rc genhtml_legend=1 00:20:07.337 --rc geninfo_all_blocks=1 00:20:07.337 --rc geninfo_unexecuted_blocks=1 00:20:07.337 00:20:07.337 ' 00:20:07.337 16:14:25 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:07.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.337 --rc genhtml_branch_coverage=1 00:20:07.337 --rc genhtml_function_coverage=1 00:20:07.337 --rc genhtml_legend=1 00:20:07.337 --rc geninfo_all_blocks=1 00:20:07.337 --rc geninfo_unexecuted_blocks=1 00:20:07.337 00:20:07.337 ' 00:20:07.337 16:14:25 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:07.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.337 --rc genhtml_branch_coverage=1 00:20:07.337 --rc genhtml_function_coverage=1 00:20:07.337 --rc genhtml_legend=1 00:20:07.337 --rc geninfo_all_blocks=1 00:20:07.337 --rc geninfo_unexecuted_blocks=1 00:20:07.337 00:20:07.337 ' 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
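The lt/cmp_versions trace a few records above is a plain component-wise version comparison used to decide whether the installed lcov still needs the branch/function coverage flags. A condensed sketch of that logic (a paraphrase of the traced behaviour, not the literal scripts/common.sh source):

  # succeed if version $1 sorts strictly before $2, comparing dot/dash/colon-separated fields
  version_lt() {
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local v x y
      for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
          x=${a[v]:-0}; y=${b[v]:-0}
          (( 10#$x < 10#$y )) && return 0
          (( 10#$x > 10#$y )) && return 1
      done
      return 1   # versions are equal, so not strictly less
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x: keep the --rc lcov_branch_coverage=1 flags"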
00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:07.337 16:14:25 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=75384 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:20:07.337 16:14:25 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 75384 00:20:07.337 16:14:25 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 75384 ']' 00:20:07.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.337 16:14:25 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.338 16:14:25 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:07.338 16:14:25 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.338 16:14:25 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:07.338 16:14:25 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:07.338 [2024-11-04 16:14:25.966312] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:20:07.338 [2024-11-04 16:14:25.966625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75384 ] 00:20:07.595 [2024-11-04 16:14:26.149514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:07.595 [2024-11-04 16:14:26.249588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.595 [2024-11-04 16:14:26.249720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.595 [2024-11-04 16:14:26.249788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.533 16:14:27 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:08.533 16:14:27 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:20:08.533 16:14:27 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:08.533 16:14:27 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:20:08.533 16:14:27 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:08.533 16:14:27 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:20:08.533 16:14:27 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:20:08.533 16:14:27 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:08.791 16:14:27 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:08.791 16:14:27 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:20:08.791 16:14:27 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:08.791 16:14:27 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:20:08.791 16:14:27 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:08.791 16:14:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:20:08.791 16:14:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:20:08.791 16:14:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:09.049 16:14:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:09.049 { 00:20:09.049 "name": "nvme0n1", 00:20:09.049 "aliases": [ 
00:20:09.049 "b79a1f92-cd9a-4bef-97c6-1852a96d3572" 00:20:09.050 ], 00:20:09.050 "product_name": "NVMe disk", 00:20:09.050 "block_size": 4096, 00:20:09.050 "num_blocks": 1310720, 00:20:09.050 "uuid": "b79a1f92-cd9a-4bef-97c6-1852a96d3572", 00:20:09.050 "numa_id": -1, 00:20:09.050 "assigned_rate_limits": { 00:20:09.050 "rw_ios_per_sec": 0, 00:20:09.050 "rw_mbytes_per_sec": 0, 00:20:09.050 "r_mbytes_per_sec": 0, 00:20:09.050 "w_mbytes_per_sec": 0 00:20:09.050 }, 00:20:09.050 "claimed": true, 00:20:09.050 "claim_type": "read_many_write_one", 00:20:09.050 "zoned": false, 00:20:09.050 "supported_io_types": { 00:20:09.050 "read": true, 00:20:09.050 "write": true, 00:20:09.050 "unmap": true, 00:20:09.050 "flush": true, 00:20:09.050 "reset": true, 00:20:09.050 "nvme_admin": true, 00:20:09.050 "nvme_io": true, 00:20:09.050 "nvme_io_md": false, 00:20:09.050 "write_zeroes": true, 00:20:09.050 "zcopy": false, 00:20:09.050 "get_zone_info": false, 00:20:09.050 "zone_management": false, 00:20:09.050 "zone_append": false, 00:20:09.050 "compare": true, 00:20:09.050 "compare_and_write": false, 00:20:09.050 "abort": true, 00:20:09.050 "seek_hole": false, 00:20:09.050 "seek_data": false, 00:20:09.050 "copy": true, 00:20:09.050 "nvme_iov_md": false 00:20:09.050 }, 00:20:09.050 "driver_specific": { 00:20:09.050 "nvme": [ 00:20:09.050 { 00:20:09.050 "pci_address": "0000:00:11.0", 00:20:09.050 "trid": { 00:20:09.050 "trtype": "PCIe", 00:20:09.050 "traddr": "0000:00:11.0" 00:20:09.050 }, 00:20:09.050 "ctrlr_data": { 00:20:09.050 "cntlid": 0, 00:20:09.050 "vendor_id": "0x1b36", 00:20:09.050 "model_number": "QEMU NVMe Ctrl", 00:20:09.050 "serial_number": "12341", 00:20:09.050 "firmware_revision": "8.0.0", 00:20:09.050 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:09.050 "oacs": { 00:20:09.050 "security": 0, 00:20:09.050 "format": 1, 00:20:09.050 "firmware": 0, 00:20:09.050 "ns_manage": 1 00:20:09.050 }, 00:20:09.050 "multi_ctrlr": false, 00:20:09.050 "ana_reporting": false 00:20:09.050 }, 00:20:09.050 "vs": { 00:20:09.050 "nvme_version": "1.4" 00:20:09.050 }, 00:20:09.050 "ns_data": { 00:20:09.050 "id": 1, 00:20:09.050 "can_share": false 00:20:09.050 } 00:20:09.050 } 00:20:09.050 ], 00:20:09.050 "mp_policy": "active_passive" 00:20:09.050 } 00:20:09.050 } 00:20:09.050 ]' 00:20:09.050 16:14:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:09.050 16:14:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:20:09.050 16:14:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:09.050 16:14:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:20:09.050 16:14:27 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:20:09.050 16:14:27 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:20:09.050 16:14:27 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:20:09.050 16:14:27 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:09.050 16:14:27 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:20:09.050 16:14:27 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:09.050 16:14:27 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:09.341 16:14:27 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=cecf5f1c-4843-4702-b54f-9059bb5698c6 00:20:09.341 16:14:27 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:20:09.341 16:14:27 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u cecf5f1c-4843-4702-b54f-9059bb5698c6 00:20:09.599 16:14:28 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:09.599 16:14:28 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=e1ab02a5-1cea-48ac-9fe0-60e0efe484b9 00:20:09.599 16:14:28 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e1ab02a5-1cea-48ac-9fe0-60e0efe484b9 00:20:09.858 16:14:28 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=008b8946-8b30-4818-bde7-80abe5ebfafe 00:20:09.858 16:14:28 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 008b8946-8b30-4818-bde7-80abe5ebfafe 00:20:09.858 16:14:28 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:20:09.858 16:14:28 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:09.858 16:14:28 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=008b8946-8b30-4818-bde7-80abe5ebfafe 00:20:09.858 16:14:28 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:20:09.858 16:14:28 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 008b8946-8b30-4818-bde7-80abe5ebfafe 00:20:09.858 16:14:28 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=008b8946-8b30-4818-bde7-80abe5ebfafe 00:20:09.858 16:14:28 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:09.858 16:14:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:20:09.858 16:14:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:20:09.858 16:14:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 008b8946-8b30-4818-bde7-80abe5ebfafe 00:20:10.117 16:14:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:10.117 { 00:20:10.117 "name": "008b8946-8b30-4818-bde7-80abe5ebfafe", 00:20:10.117 "aliases": [ 00:20:10.117 "lvs/nvme0n1p0" 00:20:10.117 ], 00:20:10.117 "product_name": "Logical Volume", 00:20:10.117 "block_size": 4096, 00:20:10.117 "num_blocks": 26476544, 00:20:10.117 "uuid": "008b8946-8b30-4818-bde7-80abe5ebfafe", 00:20:10.117 "assigned_rate_limits": { 00:20:10.117 "rw_ios_per_sec": 0, 00:20:10.117 "rw_mbytes_per_sec": 0, 00:20:10.117 "r_mbytes_per_sec": 0, 00:20:10.117 "w_mbytes_per_sec": 0 00:20:10.117 }, 00:20:10.117 "claimed": false, 00:20:10.117 "zoned": false, 00:20:10.117 "supported_io_types": { 00:20:10.117 "read": true, 00:20:10.117 "write": true, 00:20:10.117 "unmap": true, 00:20:10.117 "flush": false, 00:20:10.117 "reset": true, 00:20:10.117 "nvme_admin": false, 00:20:10.117 "nvme_io": false, 00:20:10.117 "nvme_io_md": false, 00:20:10.117 "write_zeroes": true, 00:20:10.117 "zcopy": false, 00:20:10.117 "get_zone_info": false, 00:20:10.117 "zone_management": false, 00:20:10.117 "zone_append": false, 00:20:10.117 "compare": false, 00:20:10.117 "compare_and_write": false, 00:20:10.117 "abort": false, 00:20:10.117 "seek_hole": true, 00:20:10.117 "seek_data": true, 00:20:10.117 "copy": false, 00:20:10.117 "nvme_iov_md": false 00:20:10.117 }, 00:20:10.117 "driver_specific": { 00:20:10.117 "lvol": { 00:20:10.117 "lvol_store_uuid": "e1ab02a5-1cea-48ac-9fe0-60e0efe484b9", 00:20:10.117 "base_bdev": "nvme0n1", 00:20:10.117 "thin_provision": true, 00:20:10.117 "num_allocated_clusters": 0, 00:20:10.117 "snapshot": false, 00:20:10.117 "clone": false, 00:20:10.117 "esnap_clone": false 00:20:10.117 } 00:20:10.117 } 00:20:10.117 } 00:20:10.117 ]' 00:20:10.117 16:14:28 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:10.117 16:14:28 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:20:10.117 16:14:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:10.117 16:14:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:20:10.117 16:14:28 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:10.118 16:14:28 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:20:10.118 16:14:28 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:20:10.118 16:14:28 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:20:10.118 16:14:28 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:10.376 16:14:29 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:10.376 16:14:29 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:10.376 16:14:29 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 008b8946-8b30-4818-bde7-80abe5ebfafe 00:20:10.376 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=008b8946-8b30-4818-bde7-80abe5ebfafe 00:20:10.376 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:10.376 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:20:10.376 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:20:10.376 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 008b8946-8b30-4818-bde7-80abe5ebfafe 00:20:10.635 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:10.635 { 00:20:10.635 "name": "008b8946-8b30-4818-bde7-80abe5ebfafe", 00:20:10.635 "aliases": [ 00:20:10.635 "lvs/nvme0n1p0" 00:20:10.635 ], 00:20:10.635 "product_name": "Logical Volume", 00:20:10.635 "block_size": 4096, 00:20:10.635 "num_blocks": 26476544, 00:20:10.635 "uuid": "008b8946-8b30-4818-bde7-80abe5ebfafe", 00:20:10.635 "assigned_rate_limits": { 00:20:10.635 "rw_ios_per_sec": 0, 00:20:10.635 "rw_mbytes_per_sec": 0, 00:20:10.635 "r_mbytes_per_sec": 0, 00:20:10.635 "w_mbytes_per_sec": 0 00:20:10.635 }, 00:20:10.635 "claimed": false, 00:20:10.635 "zoned": false, 00:20:10.635 "supported_io_types": { 00:20:10.635 "read": true, 00:20:10.635 "write": true, 00:20:10.635 "unmap": true, 00:20:10.635 "flush": false, 00:20:10.635 "reset": true, 00:20:10.635 "nvme_admin": false, 00:20:10.635 "nvme_io": false, 00:20:10.635 "nvme_io_md": false, 00:20:10.635 "write_zeroes": true, 00:20:10.635 "zcopy": false, 00:20:10.635 "get_zone_info": false, 00:20:10.635 "zone_management": false, 00:20:10.635 "zone_append": false, 00:20:10.635 "compare": false, 00:20:10.635 "compare_and_write": false, 00:20:10.635 "abort": false, 00:20:10.635 "seek_hole": true, 00:20:10.635 "seek_data": true, 00:20:10.635 "copy": false, 00:20:10.635 "nvme_iov_md": false 00:20:10.635 }, 00:20:10.635 "driver_specific": { 00:20:10.635 "lvol": { 00:20:10.635 "lvol_store_uuid": "e1ab02a5-1cea-48ac-9fe0-60e0efe484b9", 00:20:10.635 "base_bdev": "nvme0n1", 00:20:10.635 "thin_provision": true, 00:20:10.635 "num_allocated_clusters": 0, 00:20:10.635 "snapshot": false, 00:20:10.635 "clone": false, 00:20:10.635 "esnap_clone": false 00:20:10.635 } 00:20:10.635 } 00:20:10.635 } 00:20:10.635 ]' 00:20:10.635 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:10.635 16:14:29 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # bs=4096 00:20:10.635 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:10.894 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:20:10.894 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:10.894 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:20:10.894 16:14:29 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:20:10.894 16:14:29 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:10.894 16:14:29 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:20:10.894 16:14:29 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:20:10.894 16:14:29 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 008b8946-8b30-4818-bde7-80abe5ebfafe 00:20:10.894 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=008b8946-8b30-4818-bde7-80abe5ebfafe 00:20:10.894 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:10.894 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:20:10.894 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:20:10.894 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 008b8946-8b30-4818-bde7-80abe5ebfafe 00:20:11.156 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:11.156 { 00:20:11.156 "name": "008b8946-8b30-4818-bde7-80abe5ebfafe", 00:20:11.156 "aliases": [ 00:20:11.156 "lvs/nvme0n1p0" 00:20:11.156 ], 00:20:11.156 "product_name": "Logical Volume", 00:20:11.156 "block_size": 4096, 00:20:11.156 "num_blocks": 26476544, 00:20:11.156 "uuid": "008b8946-8b30-4818-bde7-80abe5ebfafe", 00:20:11.156 "assigned_rate_limits": { 00:20:11.156 "rw_ios_per_sec": 0, 00:20:11.156 "rw_mbytes_per_sec": 0, 00:20:11.156 "r_mbytes_per_sec": 0, 00:20:11.156 "w_mbytes_per_sec": 0 00:20:11.156 }, 00:20:11.156 "claimed": false, 00:20:11.156 "zoned": false, 00:20:11.156 "supported_io_types": { 00:20:11.156 "read": true, 00:20:11.156 "write": true, 00:20:11.156 "unmap": true, 00:20:11.156 "flush": false, 00:20:11.156 "reset": true, 00:20:11.156 "nvme_admin": false, 00:20:11.156 "nvme_io": false, 00:20:11.156 "nvme_io_md": false, 00:20:11.156 "write_zeroes": true, 00:20:11.156 "zcopy": false, 00:20:11.156 "get_zone_info": false, 00:20:11.156 "zone_management": false, 00:20:11.156 "zone_append": false, 00:20:11.156 "compare": false, 00:20:11.156 "compare_and_write": false, 00:20:11.156 "abort": false, 00:20:11.156 "seek_hole": true, 00:20:11.156 "seek_data": true, 00:20:11.156 "copy": false, 00:20:11.156 "nvme_iov_md": false 00:20:11.156 }, 00:20:11.156 "driver_specific": { 00:20:11.156 "lvol": { 00:20:11.156 "lvol_store_uuid": "e1ab02a5-1cea-48ac-9fe0-60e0efe484b9", 00:20:11.156 "base_bdev": "nvme0n1", 00:20:11.156 "thin_provision": true, 00:20:11.156 "num_allocated_clusters": 0, 00:20:11.156 "snapshot": false, 00:20:11.156 "clone": false, 00:20:11.156 "esnap_clone": false 00:20:11.156 } 00:20:11.156 } 00:20:11.156 } 00:20:11.156 ]' 00:20:11.156 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:11.156 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:20:11.156 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:11.156 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
nb=26476544 00:20:11.156 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:11.156 16:14:29 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:20:11.156 16:14:29 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:20:11.156 16:14:29 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 008b8946-8b30-4818-bde7-80abe5ebfafe -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:20:11.415 [2024-11-04 16:14:30.032469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.415 [2024-11-04 16:14:30.033148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:11.415 [2024-11-04 16:14:30.033255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:11.415 [2024-11-04 16:14:30.033326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.415 [2024-11-04 16:14:30.036732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.415 [2024-11-04 16:14:30.037017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:11.415 [2024-11-04 16:14:30.037137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.274 ms 00:20:11.415 [2024-11-04 16:14:30.037234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.415 [2024-11-04 16:14:30.037632] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:11.415 [2024-11-04 16:14:30.038858] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:11.415 [2024-11-04 16:14:30.039100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.415 [2024-11-04 16:14:30.039260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:11.415 [2024-11-04 16:14:30.039439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.480 ms 00:20:11.415 [2024-11-04 16:14:30.039592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.415 [2024-11-04 16:14:30.040007] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 84bc5547-bd10-4723-8e79-2ff33cc227b9 00:20:11.415 [2024-11-04 16:14:30.041745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.415 [2024-11-04 16:14:30.041971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:11.415 [2024-11-04 16:14:30.042146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:11.415 [2024-11-04 16:14:30.042312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.415 [2024-11-04 16:14:30.050364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.415 [2024-11-04 16:14:30.050588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:11.415 [2024-11-04 16:14:30.050779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.737 ms 00:20:11.415 [2024-11-04 16:14:30.050951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.415 [2024-11-04 16:14:30.051313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.415 [2024-11-04 16:14:30.051502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:11.415 [2024-11-04 16:14:30.051668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.090 ms 00:20:11.415 [2024-11-04 16:14:30.051886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.415 [2024-11-04 16:14:30.052102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.415 [2024-11-04 16:14:30.052266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:11.415 [2024-11-04 16:14:30.052449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:11.415 [2024-11-04 16:14:30.052612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.415 [2024-11-04 16:14:30.052836] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:11.415 [2024-11-04 16:14:30.058362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.415 [2024-11-04 16:14:30.058575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:11.415 [2024-11-04 16:14:30.058789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.540 ms 00:20:11.415 [2024-11-04 16:14:30.058984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.415 [2024-11-04 16:14:30.059225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.415 [2024-11-04 16:14:30.059394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:11.415 [2024-11-04 16:14:30.059594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:11.415 [2024-11-04 16:14:30.059794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.416 [2024-11-04 16:14:30.060036] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:11.416 [2024-11-04 16:14:30.060325] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:11.416 [2024-11-04 16:14:30.060529] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:11.416 [2024-11-04 16:14:30.060713] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:11.416 [2024-11-04 16:14:30.060934] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:11.416 [2024-11-04 16:14:30.061092] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:11.416 [2024-11-04 16:14:30.061236] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:11.416 [2024-11-04 16:14:30.061321] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:11.416 [2024-11-04 16:14:30.061380] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:11.416 [2024-11-04 16:14:30.061438] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:11.416 [2024-11-04 16:14:30.061590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.416 [2024-11-04 16:14:30.061672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:11.416 [2024-11-04 16:14:30.061745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.558 ms 00:20:11.416 [2024-11-04 16:14:30.061829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.416 [2024-11-04 16:14:30.062106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.416 
[2024-11-04 16:14:30.062191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:11.416 [2024-11-04 16:14:30.062214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:11.416 [2024-11-04 16:14:30.062226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.416 [2024-11-04 16:14:30.062408] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:11.416 [2024-11-04 16:14:30.062423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:11.416 [2024-11-04 16:14:30.062439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.416 [2024-11-04 16:14:30.062453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.416 [2024-11-04 16:14:30.062468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:11.416 [2024-11-04 16:14:30.062479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:11.416 [2024-11-04 16:14:30.062494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:11.416 [2024-11-04 16:14:30.062518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:11.416 [2024-11-04 16:14:30.062534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:11.416 [2024-11-04 16:14:30.062545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.416 [2024-11-04 16:14:30.062560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:11.416 [2024-11-04 16:14:30.062571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:11.416 [2024-11-04 16:14:30.062585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.416 [2024-11-04 16:14:30.062597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:11.416 [2024-11-04 16:14:30.062613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:11.416 [2024-11-04 16:14:30.062624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.416 [2024-11-04 16:14:30.062640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:11.416 [2024-11-04 16:14:30.062652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:11.416 [2024-11-04 16:14:30.062666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.416 [2024-11-04 16:14:30.062677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:11.416 [2024-11-04 16:14:30.062694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:11.416 [2024-11-04 16:14:30.062705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.416 [2024-11-04 16:14:30.062719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:11.416 [2024-11-04 16:14:30.062730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:11.416 [2024-11-04 16:14:30.062744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.416 [2024-11-04 16:14:30.062770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:11.416 [2024-11-04 16:14:30.062784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:11.416 [2024-11-04 16:14:30.062796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.416 [2024-11-04 16:14:30.062810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:20:11.416 [2024-11-04 16:14:30.062822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:11.416 [2024-11-04 16:14:30.062837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.416 [2024-11-04 16:14:30.062849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:11.416 [2024-11-04 16:14:30.062866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:11.416 [2024-11-04 16:14:30.062878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.416 [2024-11-04 16:14:30.062892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:11.416 [2024-11-04 16:14:30.062904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:11.416 [2024-11-04 16:14:30.062921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.416 [2024-11-04 16:14:30.062933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:11.416 [2024-11-04 16:14:30.062947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:11.416 [2024-11-04 16:14:30.062958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.416 [2024-11-04 16:14:30.062973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:11.416 [2024-11-04 16:14:30.062984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:11.416 [2024-11-04 16:14:30.062999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.416 [2024-11-04 16:14:30.063009] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:11.416 [2024-11-04 16:14:30.063025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:11.416 [2024-11-04 16:14:30.063037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.416 [2024-11-04 16:14:30.063051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.416 [2024-11-04 16:14:30.063064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:11.416 [2024-11-04 16:14:30.063083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:11.416 [2024-11-04 16:14:30.063094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:11.416 [2024-11-04 16:14:30.063109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:11.416 [2024-11-04 16:14:30.063120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:11.416 [2024-11-04 16:14:30.063134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:11.416 [2024-11-04 16:14:30.063152] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:11.416 [2024-11-04 16:14:30.063170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.416 [2024-11-04 16:14:30.063184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:11.416 [2024-11-04 16:14:30.063200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:11.416 [2024-11-04 16:14:30.063212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:20:11.416 [2024-11-04 16:14:30.063227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:11.416 [2024-11-04 16:14:30.063240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:11.416 [2024-11-04 16:14:30.063255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:11.416 [2024-11-04 16:14:30.063267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:11.416 [2024-11-04 16:14:30.063282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:11.416 [2024-11-04 16:14:30.063295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:11.416 [2024-11-04 16:14:30.063313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:11.416 [2024-11-04 16:14:30.063325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:11.416 [2024-11-04 16:14:30.063340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:11.416 [2024-11-04 16:14:30.063353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:11.416 [2024-11-04 16:14:30.063368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:11.416 [2024-11-04 16:14:30.063380] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:11.416 [2024-11-04 16:14:30.063405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.416 [2024-11-04 16:14:30.063418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:11.416 [2024-11-04 16:14:30.063433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:11.416 [2024-11-04 16:14:30.063446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:11.416 [2024-11-04 16:14:30.063462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:11.416 [2024-11-04 16:14:30.063476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.416 [2024-11-04 16:14:30.063491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:11.416 [2024-11-04 16:14:30.063504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:20:11.416 [2024-11-04 16:14:30.063519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.416 [2024-11-04 16:14:30.063696] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:20:11.417 [2024-11-04 16:14:30.063717] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:15.609 [2024-11-04 16:14:33.529117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.609 [2024-11-04 16:14:33.529826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:15.609 [2024-11-04 16:14:33.529955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3471.044 ms 00:20:15.609 [2024-11-04 16:14:33.530029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.609 [2024-11-04 16:14:33.566197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.609 [2024-11-04 16:14:33.566486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.609 [2024-11-04 16:14:33.566739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.776 ms 00:20:15.609 [2024-11-04 16:14:33.566869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.609 [2024-11-04 16:14:33.567342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.609 [2024-11-04 16:14:33.567430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:15.609 [2024-11-04 16:14:33.567492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:15.609 [2024-11-04 16:14:33.567678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.609 [2024-11-04 16:14:33.619765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.609 [2024-11-04 16:14:33.620021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.609 [2024-11-04 16:14:33.620237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.986 ms 00:20:15.609 [2024-11-04 16:14:33.620374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.609 [2024-11-04 16:14:33.620801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.609 [2024-11-04 16:14:33.621025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.609 [2024-11-04 16:14:33.621226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:15.609 [2024-11-04 16:14:33.621422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.609 [2024-11-04 16:14:33.622058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.609 [2024-11-04 16:14:33.622289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.609 [2024-11-04 16:14:33.622472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:20:15.609 [2024-11-04 16:14:33.622725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.609 [2024-11-04 16:14:33.623039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.609 [2024-11-04 16:14:33.623100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.609 [2024-11-04 16:14:33.623316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:15.609 [2024-11-04 16:14:33.623378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.609 [2024-11-04 16:14:33.644813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.609 [2024-11-04 16:14:33.644996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:15.609 [2024-11-04 16:14:33.645078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.340 ms 00:20:15.609 [2024-11-04 16:14:33.645124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.609 [2024-11-04 16:14:33.656591] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:15.610 [2024-11-04 16:14:33.673295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.610 [2024-11-04 16:14:33.673515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:15.610 [2024-11-04 16:14:33.673606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.018 ms 00:20:15.610 [2024-11-04 16:14:33.673647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.610 [2024-11-04 16:14:33.772096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.610 [2024-11-04 16:14:33.772236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:15.610 [2024-11-04 16:14:33.772291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.403 ms 00:20:15.610 [2024-11-04 16:14:33.772329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.610 [2024-11-04 16:14:33.772615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.610 [2024-11-04 16:14:33.772730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:15.610 [2024-11-04 16:14:33.772814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:20:15.610 [2024-11-04 16:14:33.772851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.610 [2024-11-04 16:14:33.809586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.610 [2024-11-04 16:14:33.809739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:15.610 [2024-11-04 16:14:33.809847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.634 ms 00:20:15.610 [2024-11-04 16:14:33.809889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.610 [2024-11-04 16:14:33.845617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.610 [2024-11-04 16:14:33.845775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:15.610 [2024-11-04 16:14:33.845888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.642 ms 00:20:15.610 [2024-11-04 16:14:33.845926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.610 [2024-11-04 16:14:33.846814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.610 [2024-11-04 16:14:33.846952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:15.610 [2024-11-04 16:14:33.846980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 00:20:15.610 [2024-11-04 16:14:33.846994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.610 [2024-11-04 16:14:33.945856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.610 [2024-11-04 16:14:33.945913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:15.610 [2024-11-04 16:14:33.945941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.937 ms 00:20:15.610 [2024-11-04 16:14:33.945954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
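These mngt/ftl_mngt.c steps, which continue below until the management process reports 'FTL startup' finished after roughly 4 seconds, are the device bring-up triggered by trim.sh. Condensed from the rpc.py calls recorded earlier in this log (the PCIe addresses, sizes, and names are the values from this particular run; the real scripts go through the create_base_bdev/create_nv_cache_bdev helpers), the setup amounts to the following sketch:

    # Sketch only: the rpc.py sequence behind the FTL startup trace, condensed from this log.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base device: attach 0000:00:11.0 and carve a 103424 MiB thin-provisioned lvol out of it.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
    base=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")
    # Write-buffer cache: attach 0000:00:10.0 and split off a 5171 MiB partition (nvc0n1p0).
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1
    # FTL bdev on top of both; scrubbing the 5 NV cache chunks dominates the ~4 s startup.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
    $rpc bdev_get_bdevs -b ftl0 -t 2000    # wait until ftl0 shows up (23592960 x 4 KiB blocks)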
00:20:15.610 [2024-11-04 16:14:33.982414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.610 [2024-11-04 16:14:33.982470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:15.610 [2024-11-04 16:14:33.982490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.323 ms 00:20:15.610 [2024-11-04 16:14:33.982528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.610 [2024-11-04 16:14:34.018358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.610 [2024-11-04 16:14:34.018399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:15.610 [2024-11-04 16:14:34.018419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.745 ms 00:20:15.610 [2024-11-04 16:14:34.018447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.610 [2024-11-04 16:14:34.054396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.610 [2024-11-04 16:14:34.054439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:15.610 [2024-11-04 16:14:34.054458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.860 ms 00:20:15.610 [2024-11-04 16:14:34.054511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.610 [2024-11-04 16:14:34.054638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.610 [2024-11-04 16:14:34.054657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:15.610 [2024-11-04 16:14:34.054676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:15.610 [2024-11-04 16:14:34.054688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.610 [2024-11-04 16:14:34.054812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.610 [2024-11-04 16:14:34.054828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:15.610 [2024-11-04 16:14:34.054844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:15.610 [2024-11-04 16:14:34.054856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.610 [2024-11-04 16:14:34.056046] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:15.610 [2024-11-04 16:14:34.060333] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4029.794 ms, result 0 00:20:15.610 [2024-11-04 16:14:34.061472] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:15.610 { 00:20:15.610 "name": "ftl0", 00:20:15.610 "uuid": "84bc5547-bd10-4723-8e79-2ff33cc227b9" 00:20:15.610 } 00:20:15.610 16:14:34 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:15.610 16:14:34 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:20:15.610 16:14:34 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:15.610 16:14:34 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:20:15.610 16:14:34 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:15.610 16:14:34 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:15.610 16:14:34 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:15.610 16:14:34 ftl.ftl_trim -- 
common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:15.869 [ 00:20:15.869 { 00:20:15.869 "name": "ftl0", 00:20:15.869 "aliases": [ 00:20:15.869 "84bc5547-bd10-4723-8e79-2ff33cc227b9" 00:20:15.869 ], 00:20:15.869 "product_name": "FTL disk", 00:20:15.869 "block_size": 4096, 00:20:15.869 "num_blocks": 23592960, 00:20:15.869 "uuid": "84bc5547-bd10-4723-8e79-2ff33cc227b9", 00:20:15.869 "assigned_rate_limits": { 00:20:15.869 "rw_ios_per_sec": 0, 00:20:15.869 "rw_mbytes_per_sec": 0, 00:20:15.869 "r_mbytes_per_sec": 0, 00:20:15.869 "w_mbytes_per_sec": 0 00:20:15.869 }, 00:20:15.869 "claimed": false, 00:20:15.869 "zoned": false, 00:20:15.869 "supported_io_types": { 00:20:15.869 "read": true, 00:20:15.869 "write": true, 00:20:15.869 "unmap": true, 00:20:15.869 "flush": true, 00:20:15.869 "reset": false, 00:20:15.869 "nvme_admin": false, 00:20:15.869 "nvme_io": false, 00:20:15.869 "nvme_io_md": false, 00:20:15.869 "write_zeroes": true, 00:20:15.869 "zcopy": false, 00:20:15.869 "get_zone_info": false, 00:20:15.869 "zone_management": false, 00:20:15.869 "zone_append": false, 00:20:15.869 "compare": false, 00:20:15.869 "compare_and_write": false, 00:20:15.869 "abort": false, 00:20:15.869 "seek_hole": false, 00:20:15.869 "seek_data": false, 00:20:15.869 "copy": false, 00:20:15.869 "nvme_iov_md": false 00:20:15.869 }, 00:20:15.869 "driver_specific": { 00:20:15.869 "ftl": { 00:20:15.869 "base_bdev": "008b8946-8b30-4818-bde7-80abe5ebfafe", 00:20:15.869 "cache": "nvc0n1p0" 00:20:15.869 } 00:20:15.869 } 00:20:15.869 } 00:20:15.869 ] 00:20:15.869 16:14:34 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:20:15.869 16:14:34 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:15.869 16:14:34 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:16.128 16:14:34 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:16.128 16:14:34 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:16.386 16:14:34 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:16.386 { 00:20:16.386 "name": "ftl0", 00:20:16.386 "aliases": [ 00:20:16.386 "84bc5547-bd10-4723-8e79-2ff33cc227b9" 00:20:16.386 ], 00:20:16.386 "product_name": "FTL disk", 00:20:16.386 "block_size": 4096, 00:20:16.386 "num_blocks": 23592960, 00:20:16.386 "uuid": "84bc5547-bd10-4723-8e79-2ff33cc227b9", 00:20:16.386 "assigned_rate_limits": { 00:20:16.386 "rw_ios_per_sec": 0, 00:20:16.386 "rw_mbytes_per_sec": 0, 00:20:16.386 "r_mbytes_per_sec": 0, 00:20:16.386 "w_mbytes_per_sec": 0 00:20:16.386 }, 00:20:16.386 "claimed": false, 00:20:16.386 "zoned": false, 00:20:16.386 "supported_io_types": { 00:20:16.386 "read": true, 00:20:16.386 "write": true, 00:20:16.386 "unmap": true, 00:20:16.386 "flush": true, 00:20:16.386 "reset": false, 00:20:16.386 "nvme_admin": false, 00:20:16.386 "nvme_io": false, 00:20:16.386 "nvme_io_md": false, 00:20:16.386 "write_zeroes": true, 00:20:16.386 "zcopy": false, 00:20:16.386 "get_zone_info": false, 00:20:16.386 "zone_management": false, 00:20:16.386 "zone_append": false, 00:20:16.386 "compare": false, 00:20:16.386 "compare_and_write": false, 00:20:16.386 "abort": false, 00:20:16.386 "seek_hole": false, 00:20:16.386 "seek_data": false, 00:20:16.386 "copy": false, 00:20:16.386 "nvme_iov_md": false 00:20:16.386 }, 00:20:16.386 "driver_specific": { 00:20:16.386 "ftl": { 00:20:16.386 "base_bdev": "008b8946-8b30-4818-bde7-80abe5ebfafe", 
00:20:16.386 "cache": "nvc0n1p0" 00:20:16.386 } 00:20:16.386 } 00:20:16.386 } 00:20:16.386 ]' 00:20:16.386 16:14:34 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:16.386 16:14:34 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:16.386 16:14:34 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:16.386 [2024-11-04 16:14:35.079945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.386 [2024-11-04 16:14:35.080149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:16.386 [2024-11-04 16:14:35.080181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:16.386 [2024-11-04 16:14:35.080202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.386 [2024-11-04 16:14:35.080284] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:16.386 [2024-11-04 16:14:35.084574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.386 [2024-11-04 16:14:35.084611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:16.386 [2024-11-04 16:14:35.084633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.268 ms 00:20:16.386 [2024-11-04 16:14:35.084646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.386 [2024-11-04 16:14:35.085795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.386 [2024-11-04 16:14:35.085824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:16.386 [2024-11-04 16:14:35.085841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:20:16.386 [2024-11-04 16:14:35.085854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.386 [2024-11-04 16:14:35.088697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.386 [2024-11-04 16:14:35.088727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:16.386 [2024-11-04 16:14:35.088754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.790 ms 00:20:16.386 [2024-11-04 16:14:35.088767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.386 [2024-11-04 16:14:35.094420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.386 [2024-11-04 16:14:35.094461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:16.386 [2024-11-04 16:14:35.094480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.565 ms 00:20:16.386 [2024-11-04 16:14:35.094517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.646 [2024-11-04 16:14:35.131422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.646 [2024-11-04 16:14:35.131487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:16.646 [2024-11-04 16:14:35.131531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.842 ms 00:20:16.646 [2024-11-04 16:14:35.131544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.646 [2024-11-04 16:14:35.153735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.646 [2024-11-04 16:14:35.153793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:16.646 [2024-11-04 16:14:35.153831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 22.103 ms 00:20:16.646 [2024-11-04 16:14:35.153847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.646 [2024-11-04 16:14:35.154192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.646 [2024-11-04 16:14:35.154209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:16.646 [2024-11-04 16:14:35.154226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:20:16.646 [2024-11-04 16:14:35.154238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.646 [2024-11-04 16:14:35.190733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.646 [2024-11-04 16:14:35.190784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:16.646 [2024-11-04 16:14:35.190804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.488 ms 00:20:16.646 [2024-11-04 16:14:35.190816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.646 [2024-11-04 16:14:35.227074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.646 [2024-11-04 16:14:35.227115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:16.646 [2024-11-04 16:14:35.227137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.174 ms 00:20:16.646 [2024-11-04 16:14:35.227148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.646 [2024-11-04 16:14:35.263115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.646 [2024-11-04 16:14:35.263159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:16.646 [2024-11-04 16:14:35.263180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.906 ms 00:20:16.646 [2024-11-04 16:14:35.263192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.646 [2024-11-04 16:14:35.298835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.646 [2024-11-04 16:14:35.298877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:16.646 [2024-11-04 16:14:35.298897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.477 ms 00:20:16.646 [2024-11-04 16:14:35.298909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.646 [2024-11-04 16:14:35.299052] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:16.646 [2024-11-04 16:14:35.299071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299178] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 
[2024-11-04 16:14:35.299679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:16.646 [2024-11-04 16:14:35.299934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.299947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.299963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.299975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.299991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:20:16.647 [2024-11-04 16:14:35.300105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:16.647 [2024-11-04 16:14:35.300732] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:16.647 [2024-11-04 16:14:35.300754] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84bc5547-bd10-4723-8e79-2ff33cc227b9 00:20:16.647 [2024-11-04 16:14:35.300767] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:16.647 [2024-11-04 16:14:35.300793] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:16.647 [2024-11-04 16:14:35.300805] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:16.647 [2024-11-04 16:14:35.300821] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:16.647 [2024-11-04 16:14:35.300836] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:16.647 [2024-11-04 16:14:35.300851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:20:16.647 [2024-11-04 16:14:35.300864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:16.647 [2024-11-04 16:14:35.300878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:16.647 [2024-11-04 16:14:35.300889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:16.647 [2024-11-04 16:14:35.300905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.647 [2024-11-04 16:14:35.300917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:16.647 [2024-11-04 16:14:35.300933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms 00:20:16.647 [2024-11-04 16:14:35.300945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.647 [2024-11-04 16:14:35.320950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.647 [2024-11-04 16:14:35.320988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:16.647 [2024-11-04 16:14:35.321014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.964 ms 00:20:16.647 [2024-11-04 16:14:35.321025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.647 [2024-11-04 16:14:35.321634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.647 [2024-11-04 16:14:35.321652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:16.647 [2024-11-04 16:14:35.321669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:20:16.647 [2024-11-04 16:14:35.321681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.906 [2024-11-04 16:14:35.389833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.906 [2024-11-04 16:14:35.389880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:16.906 [2024-11-04 16:14:35.389898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.906 [2024-11-04 16:14:35.389928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.906 [2024-11-04 16:14:35.390074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.906 [2024-11-04 16:14:35.390088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:16.906 [2024-11-04 16:14:35.390104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.906 [2024-11-04 16:14:35.390116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.906 [2024-11-04 16:14:35.390211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.906 [2024-11-04 16:14:35.390227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:16.906 [2024-11-04 16:14:35.390250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.906 [2024-11-04 16:14:35.390262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.906 [2024-11-04 16:14:35.390328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.906 [2024-11-04 16:14:35.390341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:16.906 [2024-11-04 16:14:35.390357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.906 [2024-11-04 16:14:35.390369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.906 [2024-11-04 16:14:35.518292] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.906 [2024-11-04 16:14:35.518361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.906 [2024-11-04 16:14:35.518382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.906 [2024-11-04 16:14:35.518395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.906 [2024-11-04 16:14:35.618244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.906 [2024-11-04 16:14:35.618536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:16.906 [2024-11-04 16:14:35.618569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.906 [2024-11-04 16:14:35.618583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.906 [2024-11-04 16:14:35.618771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.906 [2024-11-04 16:14:35.618787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:16.906 [2024-11-04 16:14:35.618827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.906 [2024-11-04 16:14:35.618844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.906 [2024-11-04 16:14:35.618940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.906 [2024-11-04 16:14:35.618954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:16.906 [2024-11-04 16:14:35.618970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.906 [2024-11-04 16:14:35.618983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.906 [2024-11-04 16:14:35.619141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.906 [2024-11-04 16:14:35.619156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:16.906 [2024-11-04 16:14:35.619174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.906 [2024-11-04 16:14:35.619186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.906 [2024-11-04 16:14:35.619279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.906 [2024-11-04 16:14:35.619294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:16.906 [2024-11-04 16:14:35.619310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.906 [2024-11-04 16:14:35.619322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.906 [2024-11-04 16:14:35.619412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.906 [2024-11-04 16:14:35.619426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.906 [2024-11-04 16:14:35.619445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.906 [2024-11-04 16:14:35.619458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.906 [2024-11-04 16:14:35.619546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.906 [2024-11-04 16:14:35.619560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.906 [2024-11-04 16:14:35.619575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.906 [2024-11-04 16:14:35.619587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:16.906 [2024-11-04 16:14:35.619878] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 540.802 ms, result 0 00:20:16.906 true 00:20:17.165 16:14:35 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 75384 00:20:17.165 16:14:35 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75384 ']' 00:20:17.165 16:14:35 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75384 00:20:17.165 16:14:35 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:20:17.165 16:14:35 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:17.165 16:14:35 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75384 00:20:17.165 killing process with pid 75384 00:20:17.165 16:14:35 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:17.165 16:14:35 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:17.165 16:14:35 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75384' 00:20:17.165 16:14:35 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 75384 00:20:17.165 16:14:35 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 75384 00:20:20.455 16:14:38 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:20.713 65536+0 records in 00:20:20.713 65536+0 records out 00:20:20.713 268435456 bytes (268 MB, 256 MiB) copied, 0.928716 s, 289 MB/s 00:20:20.713 16:14:39 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:20.971 [2024-11-04 16:14:39.442376] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:20:20.971 [2024-11-04 16:14:39.442662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75583 ] 00:20:20.971 [2024-11-04 16:14:39.619855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.231 [2024-11-04 16:14:39.724355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.490 [2024-11-04 16:14:40.067669] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:21.490 [2024-11-04 16:14:40.067762] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:21.750 [2024-11-04 16:14:40.230489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.750 [2024-11-04 16:14:40.230550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:21.750 [2024-11-04 16:14:40.230567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:21.750 [2024-11-04 16:14:40.230597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.750 [2024-11-04 16:14:40.233637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.750 [2024-11-04 16:14:40.233808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:21.750 [2024-11-04 16:14:40.233848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.020 ms 00:20:21.750 [2024-11-04 16:14:40.233860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.750 [2024-11-04 16:14:40.234093] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:21.750 [2024-11-04 16:14:40.235113] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:21.750 [2024-11-04 16:14:40.235154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.750 [2024-11-04 16:14:40.235168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:21.750 [2024-11-04 16:14:40.235181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:20:21.750 [2024-11-04 16:14:40.235193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.750 [2024-11-04 16:14:40.236764] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:21.750 [2024-11-04 16:14:40.254809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.750 [2024-11-04 16:14:40.254864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:21.750 [2024-11-04 16:14:40.254880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.074 ms 00:20:21.750 [2024-11-04 16:14:40.254909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.750 [2024-11-04 16:14:40.255035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.750 [2024-11-04 16:14:40.255050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:21.750 [2024-11-04 16:14:40.255062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:21.750 [2024-11-04 16:14:40.255073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.750 [2024-11-04 16:14:40.261984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:21.750 [2024-11-04 16:14:40.262014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:21.750 [2024-11-04 16:14:40.262027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.878 ms 00:20:21.750 [2024-11-04 16:14:40.262037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.750 [2024-11-04 16:14:40.262137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.750 [2024-11-04 16:14:40.262152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:21.750 [2024-11-04 16:14:40.262164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:21.750 [2024-11-04 16:14:40.262175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.750 [2024-11-04 16:14:40.262206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.750 [2024-11-04 16:14:40.262222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:21.750 [2024-11-04 16:14:40.262234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:21.750 [2024-11-04 16:14:40.262244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.750 [2024-11-04 16:14:40.262269] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:21.750 [2024-11-04 16:14:40.267140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.750 [2024-11-04 16:14:40.267175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:21.750 [2024-11-04 16:14:40.267189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.884 ms 00:20:21.750 [2024-11-04 16:14:40.267201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.750 [2024-11-04 16:14:40.267274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.750 [2024-11-04 16:14:40.267288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:21.750 [2024-11-04 16:14:40.267300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:21.750 [2024-11-04 16:14:40.267312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.750 [2024-11-04 16:14:40.267338] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:21.750 [2024-11-04 16:14:40.267368] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:21.750 [2024-11-04 16:14:40.267403] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:21.750 [2024-11-04 16:14:40.267423] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:21.750 [2024-11-04 16:14:40.267511] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:21.750 [2024-11-04 16:14:40.267525] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:21.750 [2024-11-04 16:14:40.267540] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:21.750 [2024-11-04 16:14:40.267554] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:21.750 [2024-11-04 16:14:40.267573] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:21.750 [2024-11-04 16:14:40.267586] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:21.750 [2024-11-04 16:14:40.267598] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:21.750 [2024-11-04 16:14:40.267609] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:21.750 [2024-11-04 16:14:40.267620] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:21.750 [2024-11-04 16:14:40.267632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.750 [2024-11-04 16:14:40.267644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:21.750 [2024-11-04 16:14:40.267656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:20:21.750 [2024-11-04 16:14:40.267668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.750 [2024-11-04 16:14:40.267745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.750 [2024-11-04 16:14:40.267780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:21.750 [2024-11-04 16:14:40.267797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:21.750 [2024-11-04 16:14:40.267808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.750 [2024-11-04 16:14:40.267896] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:21.750 [2024-11-04 16:14:40.267911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:21.750 [2024-11-04 16:14:40.267923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:21.750 [2024-11-04 16:14:40.267935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.750 [2024-11-04 16:14:40.267947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:21.751 [2024-11-04 16:14:40.267958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:21.751 [2024-11-04 16:14:40.267969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:21.751 [2024-11-04 16:14:40.267981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:21.751 [2024-11-04 16:14:40.267992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:21.751 [2024-11-04 16:14:40.268003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:21.751 [2024-11-04 16:14:40.268014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:21.751 [2024-11-04 16:14:40.268024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:21.751 [2024-11-04 16:14:40.268035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:21.751 [2024-11-04 16:14:40.268059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:21.751 [2024-11-04 16:14:40.268071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:21.751 [2024-11-04 16:14:40.268082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.751 [2024-11-04 16:14:40.268093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:21.751 [2024-11-04 16:14:40.268103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:21.751 [2024-11-04 16:14:40.268114] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.751 [2024-11-04 16:14:40.268125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:21.751 [2024-11-04 16:14:40.268136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:21.751 [2024-11-04 16:14:40.268147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.751 [2024-11-04 16:14:40.268157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:21.751 [2024-11-04 16:14:40.268169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:21.751 [2024-11-04 16:14:40.268179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.751 [2024-11-04 16:14:40.268190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:21.751 [2024-11-04 16:14:40.268201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:21.751 [2024-11-04 16:14:40.268211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.751 [2024-11-04 16:14:40.268222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:21.751 [2024-11-04 16:14:40.268233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:21.751 [2024-11-04 16:14:40.268244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.751 [2024-11-04 16:14:40.268254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:21.751 [2024-11-04 16:14:40.268265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:21.751 [2024-11-04 16:14:40.268276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:21.751 [2024-11-04 16:14:40.268287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:21.751 [2024-11-04 16:14:40.268297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:21.751 [2024-11-04 16:14:40.268308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:21.751 [2024-11-04 16:14:40.268318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:21.751 [2024-11-04 16:14:40.268329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:21.751 [2024-11-04 16:14:40.268339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.751 [2024-11-04 16:14:40.268350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:21.751 [2024-11-04 16:14:40.268360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:21.751 [2024-11-04 16:14:40.268373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.751 [2024-11-04 16:14:40.268385] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:21.751 [2024-11-04 16:14:40.268397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:21.751 [2024-11-04 16:14:40.268408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:21.751 [2024-11-04 16:14:40.268425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.751 [2024-11-04 16:14:40.268437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:21.751 [2024-11-04 16:14:40.268449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:21.751 [2024-11-04 16:14:40.268460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:21.751 
[2024-11-04 16:14:40.268471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:21.751 [2024-11-04 16:14:40.268481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:21.751 [2024-11-04 16:14:40.268493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:21.751 [2024-11-04 16:14:40.268505] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:21.751 [2024-11-04 16:14:40.268519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:21.751 [2024-11-04 16:14:40.268532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:21.751 [2024-11-04 16:14:40.268544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:21.751 [2024-11-04 16:14:40.268556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:21.751 [2024-11-04 16:14:40.268568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:21.751 [2024-11-04 16:14:40.268580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:21.751 [2024-11-04 16:14:40.268592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:21.751 [2024-11-04 16:14:40.268603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:21.751 [2024-11-04 16:14:40.268615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:21.751 [2024-11-04 16:14:40.268627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:21.751 [2024-11-04 16:14:40.268639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:21.751 [2024-11-04 16:14:40.268651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:21.751 [2024-11-04 16:14:40.268663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:21.751 [2024-11-04 16:14:40.268674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:21.751 [2024-11-04 16:14:40.268687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:21.751 [2024-11-04 16:14:40.268698] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:21.751 [2024-11-04 16:14:40.268711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:21.751 [2024-11-04 16:14:40.268724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:21.751 [2024-11-04 16:14:40.268737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:21.751 [2024-11-04 16:14:40.268760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:21.751 [2024-11-04 16:14:40.268773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:21.751 [2024-11-04 16:14:40.268786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.751 [2024-11-04 16:14:40.268802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:21.751 [2024-11-04 16:14:40.268818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:20:21.751 [2024-11-04 16:14:40.268829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.751 [2024-11-04 16:14:40.309717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.751 [2024-11-04 16:14:40.309772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:21.751 [2024-11-04 16:14:40.309788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.886 ms 00:20:21.751 [2024-11-04 16:14:40.309800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.751 [2024-11-04 16:14:40.309945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.751 [2024-11-04 16:14:40.309966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:21.751 [2024-11-04 16:14:40.309979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:21.751 [2024-11-04 16:14:40.309991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.751 [2024-11-04 16:14:40.384315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.751 [2024-11-04 16:14:40.384504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:21.751 [2024-11-04 16:14:40.384546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.416 ms 00:20:21.751 [2024-11-04 16:14:40.384565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.752 [2024-11-04 16:14:40.384686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.752 [2024-11-04 16:14:40.384701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:21.752 [2024-11-04 16:14:40.384714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:21.752 [2024-11-04 16:14:40.384726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.752 [2024-11-04 16:14:40.385211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.752 [2024-11-04 16:14:40.385239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:21.752 [2024-11-04 16:14:40.385253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:20:21.752 [2024-11-04 16:14:40.385270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.752 [2024-11-04 16:14:40.385400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.752 [2024-11-04 16:14:40.385420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:21.752 [2024-11-04 16:14:40.385433] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:20:21.752 [2024-11-04 16:14:40.385445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.752 [2024-11-04 16:14:40.404310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.752 [2024-11-04 16:14:40.404349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:21.752 [2024-11-04 16:14:40.404364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.870 ms 00:20:21.752 [2024-11-04 16:14:40.404375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.752 [2024-11-04 16:14:40.421885] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:21.752 [2024-11-04 16:14:40.421929] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:21.752 [2024-11-04 16:14:40.421946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.752 [2024-11-04 16:14:40.421975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:21.752 [2024-11-04 16:14:40.422000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.481 ms 00:20:21.752 [2024-11-04 16:14:40.422011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.752 [2024-11-04 16:14:40.449922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.752 [2024-11-04 16:14:40.449980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:21.752 [2024-11-04 16:14:40.450025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.866 ms 00:20:21.752 [2024-11-04 16:14:40.450038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.752 [2024-11-04 16:14:40.467755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.752 [2024-11-04 16:14:40.467794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:21.752 [2024-11-04 16:14:40.467808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.656 ms 00:20:21.752 [2024-11-04 16:14:40.467819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.011 [2024-11-04 16:14:40.484928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.011 [2024-11-04 16:14:40.485093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:22.011 [2024-11-04 16:14:40.485133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.054 ms 00:20:22.011 [2024-11-04 16:14:40.485146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.011 [2024-11-04 16:14:40.486036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.011 [2024-11-04 16:14:40.486067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:22.011 [2024-11-04 16:14:40.486081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:20:22.011 [2024-11-04 16:14:40.486093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.011 [2024-11-04 16:14:40.568527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.011 [2024-11-04 16:14:40.568595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:22.011 [2024-11-04 16:14:40.568613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.533 ms 00:20:22.011 [2024-11-04 16:14:40.568625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.011 [2024-11-04 16:14:40.578737] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:22.011 [2024-11-04 16:14:40.594217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.011 [2024-11-04 16:14:40.594447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:22.011 [2024-11-04 16:14:40.594490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.527 ms 00:20:22.012 [2024-11-04 16:14:40.594516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.012 [2024-11-04 16:14:40.594640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.012 [2024-11-04 16:14:40.594659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:22.012 [2024-11-04 16:14:40.594672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:22.012 [2024-11-04 16:14:40.594685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.012 [2024-11-04 16:14:40.594740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.012 [2024-11-04 16:14:40.594778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:22.012 [2024-11-04 16:14:40.594792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:22.012 [2024-11-04 16:14:40.594804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.012 [2024-11-04 16:14:40.594841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.012 [2024-11-04 16:14:40.594854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:22.012 [2024-11-04 16:14:40.594871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:22.012 [2024-11-04 16:14:40.594883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.012 [2024-11-04 16:14:40.594922] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:22.012 [2024-11-04 16:14:40.594936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.012 [2024-11-04 16:14:40.594949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:22.012 [2024-11-04 16:14:40.594961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:22.012 [2024-11-04 16:14:40.594973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.012 [2024-11-04 16:14:40.629881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.012 [2024-11-04 16:14:40.629931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:22.012 [2024-11-04 16:14:40.629946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.937 ms 00:20:22.012 [2024-11-04 16:14:40.629958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.012 [2024-11-04 16:14:40.630087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.012 [2024-11-04 16:14:40.630101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:22.012 [2024-11-04 16:14:40.630114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:22.012 [2024-11-04 16:14:40.630125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:22.012 [2024-11-04 16:14:40.631266] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:22.012 [2024-11-04 16:14:40.635503] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 401.014 ms, result 0 00:20:22.012 [2024-11-04 16:14:40.636375] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:22.012 [2024-11-04 16:14:40.654104] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:22.948  [2024-11-04T16:14:43.048Z] Copying: 23/256 [MB] (23 MBps) [2024-11-04T16:14:43.984Z] Copying: 45/256 [MB] (22 MBps) [2024-11-04T16:14:44.920Z] Copying: 68/256 [MB] (22 MBps) [2024-11-04T16:14:45.857Z] Copying: 91/256 [MB] (22 MBps) [2024-11-04T16:14:46.796Z] Copying: 114/256 [MB] (22 MBps) [2024-11-04T16:14:47.733Z] Copying: 135/256 [MB] (21 MBps) [2024-11-04T16:14:48.670Z] Copying: 157/256 [MB] (21 MBps) [2024-11-04T16:14:50.048Z] Copying: 179/256 [MB] (22 MBps) [2024-11-04T16:14:50.984Z] Copying: 201/256 [MB] (22 MBps) [2024-11-04T16:14:51.921Z] Copying: 224/256 [MB] (22 MBps) [2024-11-04T16:14:52.180Z] Copying: 248/256 [MB] (23 MBps) [2024-11-04T16:14:52.180Z] Copying: 256/256 [MB] (average 22 MBps)[2024-11-04 16:14:51.983448] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:33.458 [2024-11-04 16:14:51.997341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.458 [2024-11-04 16:14:51.997514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:33.458 [2024-11-04 16:14:51.997540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:33.458 [2024-11-04 16:14:51.997552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.458 [2024-11-04 16:14:51.997586] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:33.458 [2024-11-04 16:14:52.001550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.458 [2024-11-04 16:14:52.001596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:33.458 [2024-11-04 16:14:52.001610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.951 ms 00:20:33.458 [2024-11-04 16:14:52.001621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.458 [2024-11-04 16:14:52.003616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.458 [2024-11-04 16:14:52.003662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:33.458 [2024-11-04 16:14:52.003677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.967 ms 00:20:33.458 [2024-11-04 16:14:52.003689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.459 [2024-11-04 16:14:52.010338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.459 [2024-11-04 16:14:52.010382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:33.459 [2024-11-04 16:14:52.010419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.638 ms 00:20:33.459 [2024-11-04 16:14:52.010431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.459 [2024-11-04 16:14:52.015865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.459 
[2024-11-04 16:14:52.016010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:33.459 [2024-11-04 16:14:52.016032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.400 ms 00:20:33.459 [2024-11-04 16:14:52.016061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.459 [2024-11-04 16:14:52.049757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.459 [2024-11-04 16:14:52.049799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:33.459 [2024-11-04 16:14:52.049813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.679 ms 00:20:33.459 [2024-11-04 16:14:52.049824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.459 [2024-11-04 16:14:52.069921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.459 [2024-11-04 16:14:52.070068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:33.459 [2024-11-04 16:14:52.070116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.072 ms 00:20:33.459 [2024-11-04 16:14:52.070132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.459 [2024-11-04 16:14:52.070262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.459 [2024-11-04 16:14:52.070276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:33.459 [2024-11-04 16:14:52.070288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:33.459 [2024-11-04 16:14:52.070300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.459 [2024-11-04 16:14:52.104786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.459 [2024-11-04 16:14:52.104828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:33.459 [2024-11-04 16:14:52.104843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.522 ms 00:20:33.459 [2024-11-04 16:14:52.104854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.459 [2024-11-04 16:14:52.138974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.459 [2024-11-04 16:14:52.139141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:33.459 [2024-11-04 16:14:52.139164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.117 ms 00:20:33.459 [2024-11-04 16:14:52.139175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.459 [2024-11-04 16:14:52.174376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.459 [2024-11-04 16:14:52.174561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:33.459 [2024-11-04 16:14:52.174584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.201 ms 00:20:33.459 [2024-11-04 16:14:52.174595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.719 [2024-11-04 16:14:52.209742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.719 [2024-11-04 16:14:52.209797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:33.719 [2024-11-04 16:14:52.209812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.118 ms 00:20:33.719 [2024-11-04 16:14:52.209824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.719 [2024-11-04 16:14:52.209885] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:33.719 [2024-11-04 16:14:52.209911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.209925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.209938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.209951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.209963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.209975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.209987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.209999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.210011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.210023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.210036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.210048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.210060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.210072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.210084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.210096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.210108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.210120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.210132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.210144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.210156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.210168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.210180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:33.719 [2024-11-04 16:14:52.210192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210204] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 
16:14:52.210531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:20:33.720 [2024-11-04 16:14:52.210868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.210996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.211008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.211021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.211032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.211044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.211056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.211068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.211080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.211093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.211104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.211116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.211129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.211155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.211168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.211180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.211192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:33.720 [2024-11-04 16:14:52.211211] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:33.720 [2024-11-04 16:14:52.211222] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84bc5547-bd10-4723-8e79-2ff33cc227b9 00:20:33.720 [2024-11-04 16:14:52.211234] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:33.720 [2024-11-04 16:14:52.211246] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:33.720 [2024-11-04 16:14:52.211257] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:33.720 [2024-11-04 16:14:52.211268] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:33.720 [2024-11-04 16:14:52.211280] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:33.720 [2024-11-04 16:14:52.211291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:33.720 [2024-11-04 16:14:52.211303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:33.720 [2024-11-04 16:14:52.211313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:33.720 [2024-11-04 16:14:52.211324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:33.720 [2024-11-04 16:14:52.211335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.720 [2024-11-04 16:14:52.211347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:33.720 [2024-11-04 16:14:52.211364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:20:33.720 [2024-11-04 16:14:52.211375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.720 [2024-11-04 16:14:52.230691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.720 [2024-11-04 16:14:52.230732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:33.721 [2024-11-04 16:14:52.230768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.323 ms 00:20:33.721 [2024-11-04 16:14:52.230782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.721 [2024-11-04 16:14:52.231332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.721 [2024-11-04 16:14:52.231364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:33.721 [2024-11-04 16:14:52.231378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:20:33.721 [2024-11-04 16:14:52.231389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.721 [2024-11-04 16:14:52.285179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.721 [2024-11-04 16:14:52.285218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:33.721 [2024-11-04 16:14:52.285233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.721 [2024-11-04 16:14:52.285245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.721 [2024-11-04 16:14:52.285324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.721 [2024-11-04 16:14:52.285340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:33.721 [2024-11-04 16:14:52.285352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:20:33.721 [2024-11-04 16:14:52.285364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.721 [2024-11-04 16:14:52.285423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.721 [2024-11-04 16:14:52.285437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:33.721 [2024-11-04 16:14:52.285449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.721 [2024-11-04 16:14:52.285460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.721 [2024-11-04 16:14:52.285481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.721 [2024-11-04 16:14:52.285493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:33.721 [2024-11-04 16:14:52.285510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.721 [2024-11-04 16:14:52.285522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.721 [2024-11-04 16:14:52.404420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.721 [2024-11-04 16:14:52.404473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:33.721 [2024-11-04 16:14:52.404488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.721 [2024-11-04 16:14:52.404500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.980 [2024-11-04 16:14:52.499392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.980 [2024-11-04 16:14:52.499444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:33.980 [2024-11-04 16:14:52.499466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.980 [2024-11-04 16:14:52.499477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.980 [2024-11-04 16:14:52.499545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.980 [2024-11-04 16:14:52.499559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:33.980 [2024-11-04 16:14:52.499570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.980 [2024-11-04 16:14:52.499581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.980 [2024-11-04 16:14:52.499611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.980 [2024-11-04 16:14:52.499623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:33.980 [2024-11-04 16:14:52.499635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.980 [2024-11-04 16:14:52.499650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.980 [2024-11-04 16:14:52.499802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.980 [2024-11-04 16:14:52.499817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:33.980 [2024-11-04 16:14:52.499830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.980 [2024-11-04 16:14:52.499842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.980 [2024-11-04 16:14:52.499887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.980 [2024-11-04 16:14:52.499900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:33.980 
[2024-11-04 16:14:52.499929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.980 [2024-11-04 16:14:52.499941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.980 [2024-11-04 16:14:52.499990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.980 [2024-11-04 16:14:52.500003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:33.980 [2024-11-04 16:14:52.500015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.980 [2024-11-04 16:14:52.500026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.980 [2024-11-04 16:14:52.500074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.980 [2024-11-04 16:14:52.500087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:33.980 [2024-11-04 16:14:52.500099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.980 [2024-11-04 16:14:52.500115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.980 [2024-11-04 16:14:52.500298] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 503.754 ms, result 0 00:20:35.357 00:20:35.357 00:20:35.357 16:14:53 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=75729 00:20:35.357 16:14:53 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:35.357 16:14:53 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 75729 00:20:35.357 16:14:53 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 75729 ']' 00:20:35.357 16:14:53 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.357 16:14:53 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:35.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.357 16:14:53 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.357 16:14:53 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:35.357 16:14:53 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:35.357 [2024-11-04 16:14:53.800814] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:20:35.357 [2024-11-04 16:14:53.800943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75729 ] 00:20:35.357 [2024-11-04 16:14:53.979464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.616 [2024-11-04 16:14:54.085571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.554 16:14:54 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:36.554 16:14:54 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:20:36.554 16:14:54 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:36.554 [2024-11-04 16:14:55.104039] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:36.554 [2024-11-04 16:14:55.104105] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:36.815 [2024-11-04 16:14:55.283088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.815 [2024-11-04 16:14:55.283331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:36.815 [2024-11-04 16:14:55.283381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:36.815 [2024-11-04 16:14:55.283394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.815 [2024-11-04 16:14:55.286674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.815 [2024-11-04 16:14:55.286857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:36.815 [2024-11-04 16:14:55.286887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.254 ms 00:20:36.815 [2024-11-04 16:14:55.286900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.815 [2024-11-04 16:14:55.287063] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:36.815 [2024-11-04 16:14:55.288077] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:36.815 [2024-11-04 16:14:55.288117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.815 [2024-11-04 16:14:55.288130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:36.815 [2024-11-04 16:14:55.288145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.069 ms 00:20:36.815 [2024-11-04 16:14:55.288157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.815 [2024-11-04 16:14:55.289705] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:36.815 [2024-11-04 16:14:55.307475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.815 [2024-11-04 16:14:55.307525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:36.815 [2024-11-04 16:14:55.307567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.804 ms 00:20:36.815 [2024-11-04 16:14:55.307582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.815 [2024-11-04 16:14:55.307687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.815 [2024-11-04 16:14:55.307706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:36.815 [2024-11-04 16:14:55.307719] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:36.815 [2024-11-04 16:14:55.307733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.815 [2024-11-04 16:14:55.314733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.815 [2024-11-04 16:14:55.314803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:36.815 [2024-11-04 16:14:55.314817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.923 ms 00:20:36.815 [2024-11-04 16:14:55.314832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.815 [2024-11-04 16:14:55.314947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.815 [2024-11-04 16:14:55.314967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:36.815 [2024-11-04 16:14:55.314980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:36.815 [2024-11-04 16:14:55.315000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.815 [2024-11-04 16:14:55.315033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.815 [2024-11-04 16:14:55.315049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:36.815 [2024-11-04 16:14:55.315062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:36.815 [2024-11-04 16:14:55.315076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.815 [2024-11-04 16:14:55.315104] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:36.815 [2024-11-04 16:14:55.319947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.815 [2024-11-04 16:14:55.319982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:36.815 [2024-11-04 16:14:55.319998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.853 ms 00:20:36.815 [2024-11-04 16:14:55.320027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.815 [2024-11-04 16:14:55.320106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.815 [2024-11-04 16:14:55.320119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:36.815 [2024-11-04 16:14:55.320135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:36.815 [2024-11-04 16:14:55.320150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.815 [2024-11-04 16:14:55.320178] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:36.815 [2024-11-04 16:14:55.320201] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:36.815 [2024-11-04 16:14:55.320248] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:36.815 [2024-11-04 16:14:55.320269] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:36.815 [2024-11-04 16:14:55.320364] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:36.815 [2024-11-04 16:14:55.320380] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:36.815 [2024-11-04 16:14:55.320404] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:36.815 [2024-11-04 16:14:55.320419] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:36.815 [2024-11-04 16:14:55.320436] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:36.815 [2024-11-04 16:14:55.320449] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:36.816 [2024-11-04 16:14:55.320464] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:36.816 [2024-11-04 16:14:55.320476] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:36.816 [2024-11-04 16:14:55.320492] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:36.816 [2024-11-04 16:14:55.320505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.816 [2024-11-04 16:14:55.320520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:36.816 [2024-11-04 16:14:55.320532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:20:36.816 [2024-11-04 16:14:55.320550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.816 [2024-11-04 16:14:55.320626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.816 [2024-11-04 16:14:55.320642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:36.816 [2024-11-04 16:14:55.320654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:36.816 [2024-11-04 16:14:55.320682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.816 [2024-11-04 16:14:55.320815] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:36.816 [2024-11-04 16:14:55.320854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:36.816 [2024-11-04 16:14:55.320866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:36.816 [2024-11-04 16:14:55.320884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.816 [2024-11-04 16:14:55.320897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:36.816 [2024-11-04 16:14:55.320914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:36.816 [2024-11-04 16:14:55.320926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:36.816 [2024-11-04 16:14:55.320949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:36.816 [2024-11-04 16:14:55.320962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:36.816 [2024-11-04 16:14:55.320978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:36.816 [2024-11-04 16:14:55.320990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:36.816 [2024-11-04 16:14:55.321007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:36.816 [2024-11-04 16:14:55.321019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:36.816 [2024-11-04 16:14:55.321036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:36.816 [2024-11-04 16:14:55.321049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:36.816 [2024-11-04 16:14:55.321066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.816 
[2024-11-04 16:14:55.321093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:36.816 [2024-11-04 16:14:55.321111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:36.816 [2024-11-04 16:14:55.321123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.816 [2024-11-04 16:14:55.321141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:36.816 [2024-11-04 16:14:55.321164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:36.816 [2024-11-04 16:14:55.321181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.816 [2024-11-04 16:14:55.321193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:36.816 [2024-11-04 16:14:55.321215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:36.816 [2024-11-04 16:14:55.321228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.816 [2024-11-04 16:14:55.321242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:36.816 [2024-11-04 16:14:55.321253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:36.816 [2024-11-04 16:14:55.321267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.816 [2024-11-04 16:14:55.321278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:36.816 [2024-11-04 16:14:55.321292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:36.816 [2024-11-04 16:14:55.321304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.816 [2024-11-04 16:14:55.321319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:36.816 [2024-11-04 16:14:55.321331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:36.816 [2024-11-04 16:14:55.321344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:36.816 [2024-11-04 16:14:55.321355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:36.816 [2024-11-04 16:14:55.321369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:36.816 [2024-11-04 16:14:55.321381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:36.816 [2024-11-04 16:14:55.321395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:36.816 [2024-11-04 16:14:55.321406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:36.816 [2024-11-04 16:14:55.321422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.816 [2024-11-04 16:14:55.321434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:36.816 [2024-11-04 16:14:55.321447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:36.816 [2024-11-04 16:14:55.321459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.816 [2024-11-04 16:14:55.321472] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:36.816 [2024-11-04 16:14:55.321487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:36.816 [2024-11-04 16:14:55.321501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:36.816 [2024-11-04 16:14:55.321515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.816 [2024-11-04 16:14:55.321530] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:36.816 [2024-11-04 16:14:55.321542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:36.816 [2024-11-04 16:14:55.321556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:36.816 [2024-11-04 16:14:55.321568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:36.816 [2024-11-04 16:14:55.321582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:36.816 [2024-11-04 16:14:55.321594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:36.816 [2024-11-04 16:14:55.321610] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:36.816 [2024-11-04 16:14:55.321625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:36.816 [2024-11-04 16:14:55.321645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:36.816 [2024-11-04 16:14:55.321658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:36.816 [2024-11-04 16:14:55.321673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:36.816 [2024-11-04 16:14:55.321686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:36.816 [2024-11-04 16:14:55.321701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:36.816 [2024-11-04 16:14:55.321713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:36.816 [2024-11-04 16:14:55.321728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:36.816 [2024-11-04 16:14:55.321740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:36.816 [2024-11-04 16:14:55.321767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:36.816 [2024-11-04 16:14:55.321780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:36.816 [2024-11-04 16:14:55.321795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:36.816 [2024-11-04 16:14:55.321808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:36.816 [2024-11-04 16:14:55.321824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:36.816 [2024-11-04 16:14:55.321836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:36.816 [2024-11-04 16:14:55.321851] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:36.816 [2024-11-04 
16:14:55.321865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:36.816 [2024-11-04 16:14:55.321884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:36.816 [2024-11-04 16:14:55.321897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:36.816 [2024-11-04 16:14:55.321912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:36.816 [2024-11-04 16:14:55.321924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:36.816 [2024-11-04 16:14:55.321940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.816 [2024-11-04 16:14:55.321952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:36.816 [2024-11-04 16:14:55.321967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.204 ms 00:20:36.816 [2024-11-04 16:14:55.321983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.816 [2024-11-04 16:14:55.362712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.816 [2024-11-04 16:14:55.362762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:36.816 [2024-11-04 16:14:55.362798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.728 ms 00:20:36.816 [2024-11-04 16:14:55.362814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.816 [2024-11-04 16:14:55.362964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.816 [2024-11-04 16:14:55.362979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:36.816 [2024-11-04 16:14:55.362995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:36.816 [2024-11-04 16:14:55.363007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.816 [2024-11-04 16:14:55.406868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.817 [2024-11-04 16:14:55.406913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:36.817 [2024-11-04 16:14:55.406931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.901 ms 00:20:36.817 [2024-11-04 16:14:55.406960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.817 [2024-11-04 16:14:55.407057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.817 [2024-11-04 16:14:55.407071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:36.817 [2024-11-04 16:14:55.407087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:36.817 [2024-11-04 16:14:55.407099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.817 [2024-11-04 16:14:55.407554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.817 [2024-11-04 16:14:55.407597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:36.817 [2024-11-04 16:14:55.407613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:20:36.817 [2024-11-04 16:14:55.407626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:36.817 [2024-11-04 16:14:55.407766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.817 [2024-11-04 16:14:55.407787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:36.817 [2024-11-04 16:14:55.407803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:20:36.817 [2024-11-04 16:14:55.407815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.817 [2024-11-04 16:14:55.429568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.817 [2024-11-04 16:14:55.429605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:36.817 [2024-11-04 16:14:55.429623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.757 ms 00:20:36.817 [2024-11-04 16:14:55.429635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.817 [2024-11-04 16:14:55.450362] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:36.817 [2024-11-04 16:14:55.450404] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:36.817 [2024-11-04 16:14:55.450428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.817 [2024-11-04 16:14:55.450440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:36.817 [2024-11-04 16:14:55.450456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.689 ms 00:20:36.817 [2024-11-04 16:14:55.450467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.817 [2024-11-04 16:14:55.479131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.817 [2024-11-04 16:14:55.479295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:36.817 [2024-11-04 16:14:55.479326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.609 ms 00:20:36.817 [2024-11-04 16:14:55.479339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.817 [2024-11-04 16:14:55.496723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.817 [2024-11-04 16:14:55.496778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:36.817 [2024-11-04 16:14:55.496817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.318 ms 00:20:36.817 [2024-11-04 16:14:55.496829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.817 [2024-11-04 16:14:55.513981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.817 [2024-11-04 16:14:55.514022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:36.817 [2024-11-04 16:14:55.514040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.074 ms 00:20:36.817 [2024-11-04 16:14:55.514051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.817 [2024-11-04 16:14:55.514850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.817 [2024-11-04 16:14:55.514878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:36.817 [2024-11-04 16:14:55.514894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:20:36.817 [2024-11-04 16:14:55.514906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.076 [2024-11-04 
16:14:55.610591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-04 16:14:55.610803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:37.077 [2024-11-04 16:14:55.610838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.801 ms 00:20:37.077 [2024-11-04 16:14:55.610852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-04 16:14:55.622043] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:37.077 [2024-11-04 16:14:55.638555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-04 16:14:55.638616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:37.077 [2024-11-04 16:14:55.638632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.596 ms 00:20:37.077 [2024-11-04 16:14:55.638648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-04 16:14:55.638808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-04 16:14:55.638828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:37.077 [2024-11-04 16:14:55.638842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:37.077 [2024-11-04 16:14:55.638858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-04 16:14:55.638916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-04 16:14:55.638932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:37.077 [2024-11-04 16:14:55.638945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:37.077 [2024-11-04 16:14:55.638964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-04 16:14:55.638993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-04 16:14:55.639010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:37.077 [2024-11-04 16:14:55.639023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:37.077 [2024-11-04 16:14:55.639040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-04 16:14:55.639080] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:37.077 [2024-11-04 16:14:55.639107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-04 16:14:55.639126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:37.077 [2024-11-04 16:14:55.639145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:37.077 [2024-11-04 16:14:55.639157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-04 16:14:55.675076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-04 16:14:55.675122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:37.077 [2024-11-04 16:14:55.675145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.928 ms 00:20:37.077 [2024-11-04 16:14:55.675158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-04 16:14:55.675286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.077 [2024-11-04 16:14:55.675301] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:37.077 [2024-11-04 16:14:55.675328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:37.077 [2024-11-04 16:14:55.675340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.077 [2024-11-04 16:14:55.676316] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:37.077 [2024-11-04 16:14:55.680635] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 393.569 ms, result 0 00:20:37.077 [2024-11-04 16:14:55.681918] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:37.077 Some configs were skipped because the RPC state that can call them passed over. 00:20:37.077 16:14:55 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:37.336 [2024-11-04 16:14:55.929520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.336 [2024-11-04 16:14:55.929711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:37.336 [2024-11-04 16:14:55.929823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.726 ms 00:20:37.336 [2024-11-04 16:14:55.929880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.336 [2024-11-04 16:14:55.929965] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.189 ms, result 0 00:20:37.336 true 00:20:37.336 16:14:55 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:37.595 [2024-11-04 16:14:56.144912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.595 [2024-11-04 16:14:56.145062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:37.595 [2024-11-04 16:14:56.145157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:20:37.595 [2024-11-04 16:14:56.145201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.595 [2024-11-04 16:14:56.145307] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.633 ms, result 0 00:20:37.595 true 00:20:37.595 16:14:56 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 75729 00:20:37.595 16:14:56 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75729 ']' 00:20:37.595 16:14:56 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75729 00:20:37.595 16:14:56 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:20:37.595 16:14:56 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:37.595 16:14:56 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75729 00:20:37.595 killing process with pid 75729 00:20:37.595 16:14:56 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:37.595 16:14:56 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:37.595 16:14:56 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75729' 00:20:37.595 16:14:56 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 75729 00:20:37.595 16:14:56 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 75729 00:20:38.974 [2024-11-04 16:14:57.258224] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.974 [2024-11-04 16:14:57.258295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:38.974 [2024-11-04 16:14:57.258312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:38.974 [2024-11-04 16:14:57.258326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.974 [2024-11-04 16:14:57.258356] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:38.974 [2024-11-04 16:14:57.262464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.974 [2024-11-04 16:14:57.262501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:38.974 [2024-11-04 16:14:57.262530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.090 ms 00:20:38.974 [2024-11-04 16:14:57.262558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.974 [2024-11-04 16:14:57.262828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.974 [2024-11-04 16:14:57.262843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:38.974 [2024-11-04 16:14:57.262859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:20:38.974 [2024-11-04 16:14:57.262871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.974 [2024-11-04 16:14:57.266250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.974 [2024-11-04 16:14:57.266292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:38.974 [2024-11-04 16:14:57.266314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.357 ms 00:20:38.974 [2024-11-04 16:14:57.266327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.974 [2024-11-04 16:14:57.271828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.974 [2024-11-04 16:14:57.271867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:38.974 [2024-11-04 16:14:57.271884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.462 ms 00:20:38.974 [2024-11-04 16:14:57.271912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.974 [2024-11-04 16:14:57.286620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.974 [2024-11-04 16:14:57.286672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:38.974 [2024-11-04 16:14:57.286709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.660 ms 00:20:38.974 [2024-11-04 16:14:57.286731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.974 [2024-11-04 16:14:57.297308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.974 [2024-11-04 16:14:57.297351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:38.974 [2024-11-04 16:14:57.297369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.498 ms 00:20:38.974 [2024-11-04 16:14:57.297380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.974 [2024-11-04 16:14:57.297523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.974 [2024-11-04 16:14:57.297538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:38.974 [2024-11-04 16:14:57.297552] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:38.974 [2024-11-04 16:14:57.297563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.974 [2024-11-04 16:14:57.312731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.974 [2024-11-04 16:14:57.312779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:38.974 [2024-11-04 16:14:57.312797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.166 ms 00:20:38.974 [2024-11-04 16:14:57.312824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.974 [2024-11-04 16:14:57.326642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.974 [2024-11-04 16:14:57.326679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:38.974 [2024-11-04 16:14:57.326717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.776 ms 00:20:38.974 [2024-11-04 16:14:57.326728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.974 [2024-11-04 16:14:57.340550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.974 [2024-11-04 16:14:57.340699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:38.974 [2024-11-04 16:14:57.340746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.769 ms 00:20:38.974 [2024-11-04 16:14:57.340758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.974 [2024-11-04 16:14:57.354532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.974 [2024-11-04 16:14:57.354695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:38.974 [2024-11-04 16:14:57.354741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.688 ms 00:20:38.974 [2024-11-04 16:14:57.354752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.974 [2024-11-04 16:14:57.354854] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:38.974 [2024-11-04 16:14:57.354874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.354897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.354911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.354926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.354939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.354957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.354970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.354986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.354998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 
16:14:57.355027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:20:38.974 [2024-11-04 16:14:57.355383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:38.974 [2024-11-04 16:14:57.355484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.355993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:38.975 [2024-11-04 16:14:57.356327] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:38.975 [2024-11-04 16:14:57.356344] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84bc5547-bd10-4723-8e79-2ff33cc227b9 00:20:38.975 [2024-11-04 16:14:57.356370] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:38.975 [2024-11-04 16:14:57.356385] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:38.975 [2024-11-04 16:14:57.356397] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:38.975 [2024-11-04 16:14:57.356412] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:38.975 [2024-11-04 16:14:57.356423] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:38.975 [2024-11-04 16:14:57.356438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:38.975 [2024-11-04 16:14:57.356450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:38.975 [2024-11-04 16:14:57.356464] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:38.975 [2024-11-04 16:14:57.356475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:38.975 [2024-11-04 16:14:57.356490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:38.975 [2024-11-04 16:14:57.356502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:38.975 [2024-11-04 16:14:57.356518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.641 ms 00:20:38.975 [2024-11-04 16:14:57.356533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.975 [2024-11-04 16:14:57.375830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.975 [2024-11-04 16:14:57.375866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:38.975 [2024-11-04 16:14:57.375902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.298 ms 00:20:38.975 [2024-11-04 16:14:57.375915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.975 [2024-11-04 16:14:57.376463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.975 [2024-11-04 16:14:57.376483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:38.975 [2024-11-04 16:14:57.376509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:20:38.975 [2024-11-04 16:14:57.376521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.975 [2024-11-04 16:14:57.440975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.975 [2024-11-04 16:14:57.441016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:38.975 [2024-11-04 16:14:57.441034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.975 [2024-11-04 16:14:57.441045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.975 [2024-11-04 16:14:57.441133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.975 [2024-11-04 16:14:57.441147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:38.975 [2024-11-04 16:14:57.441165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.975 [2024-11-04 16:14:57.441177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.975 [2024-11-04 16:14:57.441233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.975 [2024-11-04 16:14:57.441247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:38.975 [2024-11-04 16:14:57.441264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.975 [2024-11-04 16:14:57.441276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.975 [2024-11-04 16:14:57.441300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.975 [2024-11-04 16:14:57.441312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:38.975 [2024-11-04 16:14:57.441326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.975 [2024-11-04 16:14:57.441340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.975 [2024-11-04 16:14:57.558479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.975 [2024-11-04 16:14:57.558542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:38.976 [2024-11-04 16:14:57.558566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.976 [2024-11-04 16:14:57.558578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.976 [2024-11-04 
16:14:57.654465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.976 [2024-11-04 16:14:57.654724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:38.976 [2024-11-04 16:14:57.654793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.976 [2024-11-04 16:14:57.654812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.976 [2024-11-04 16:14:57.654900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.976 [2024-11-04 16:14:57.654915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:38.976 [2024-11-04 16:14:57.654935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.976 [2024-11-04 16:14:57.654947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.976 [2024-11-04 16:14:57.654983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.976 [2024-11-04 16:14:57.654995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:38.976 [2024-11-04 16:14:57.655011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.976 [2024-11-04 16:14:57.655023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.976 [2024-11-04 16:14:57.655158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.976 [2024-11-04 16:14:57.655174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:38.976 [2024-11-04 16:14:57.655190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.976 [2024-11-04 16:14:57.655203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.976 [2024-11-04 16:14:57.655253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.976 [2024-11-04 16:14:57.655267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:38.976 [2024-11-04 16:14:57.655283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.976 [2024-11-04 16:14:57.655295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.976 [2024-11-04 16:14:57.655344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.976 [2024-11-04 16:14:57.655358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:38.976 [2024-11-04 16:14:57.655376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.976 [2024-11-04 16:14:57.655388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.976 [2024-11-04 16:14:57.655440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.976 [2024-11-04 16:14:57.655454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:38.976 [2024-11-04 16:14:57.655469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.976 [2024-11-04 16:14:57.655481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.976 [2024-11-04 16:14:57.655631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 398.049 ms, result 0 00:20:39.914 16:14:58 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:39.914 16:14:58 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:40.173 [2024-11-04 16:14:58.713684] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:20:40.173 [2024-11-04 16:14:58.714037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75793 ] 00:20:40.433 [2024-11-04 16:14:58.899279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.433 [2024-11-04 16:14:59.005499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.692 [2024-11-04 16:14:59.349341] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.692 [2024-11-04 16:14:59.349661] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.952 [2024-11-04 16:14:59.512518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.952 [2024-11-04 16:14:59.512571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:40.952 [2024-11-04 16:14:59.512588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:40.952 [2024-11-04 16:14:59.512600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.952 [2024-11-04 16:14:59.515672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.952 [2024-11-04 16:14:59.515942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:40.952 [2024-11-04 16:14:59.515967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.054 ms 00:20:40.952 [2024-11-04 16:14:59.515980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.952 [2024-11-04 16:14:59.516124] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:40.952 [2024-11-04 16:14:59.517092] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:40.952 [2024-11-04 16:14:59.517130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.952 [2024-11-04 16:14:59.517143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:40.952 [2024-11-04 16:14:59.517156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:20:40.952 [2024-11-04 16:14:59.517168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.952 [2024-11-04 16:14:59.518765] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:40.953 [2024-11-04 16:14:59.537050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.953 [2024-11-04 16:14:59.537097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:40.953 [2024-11-04 16:14:59.537111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.335 ms 00:20:40.953 [2024-11-04 16:14:59.537123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.953 [2024-11-04 16:14:59.537227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.953 [2024-11-04 16:14:59.537242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:40.953 [2024-11-04 16:14:59.537255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.022 ms 00:20:40.953 [2024-11-04 16:14:59.537266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.953 [2024-11-04 16:14:59.544210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.953 [2024-11-04 16:14:59.544242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:40.953 [2024-11-04 16:14:59.544256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.911 ms 00:20:40.953 [2024-11-04 16:14:59.544267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.953 [2024-11-04 16:14:59.544368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.953 [2024-11-04 16:14:59.544383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:40.953 [2024-11-04 16:14:59.544396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:40.953 [2024-11-04 16:14:59.544408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.953 [2024-11-04 16:14:59.544438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.953 [2024-11-04 16:14:59.544454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:40.953 [2024-11-04 16:14:59.544467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:40.953 [2024-11-04 16:14:59.544477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.953 [2024-11-04 16:14:59.544502] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:40.953 [2024-11-04 16:14:59.549420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.953 [2024-11-04 16:14:59.549454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:40.953 [2024-11-04 16:14:59.549467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.931 ms 00:20:40.953 [2024-11-04 16:14:59.549495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.953 [2024-11-04 16:14:59.549567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.953 [2024-11-04 16:14:59.549581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:40.953 [2024-11-04 16:14:59.549594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:40.953 [2024-11-04 16:14:59.549605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.953 [2024-11-04 16:14:59.549631] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:40.953 [2024-11-04 16:14:59.549661] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:40.953 [2024-11-04 16:14:59.549698] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:40.953 [2024-11-04 16:14:59.549718] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:40.953 [2024-11-04 16:14:59.549825] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:40.953 [2024-11-04 16:14:59.549841] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:40.953 [2024-11-04 16:14:59.549856] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:40.953 [2024-11-04 16:14:59.549871] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:40.953 [2024-11-04 16:14:59.549889] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:40.953 [2024-11-04 16:14:59.549902] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:40.953 [2024-11-04 16:14:59.549913] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:40.953 [2024-11-04 16:14:59.549925] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:40.953 [2024-11-04 16:14:59.549937] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:40.953 [2024-11-04 16:14:59.549949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.953 [2024-11-04 16:14:59.549961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:40.953 [2024-11-04 16:14:59.549973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:20:40.953 [2024-11-04 16:14:59.549984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.953 [2024-11-04 16:14:59.550062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.953 [2024-11-04 16:14:59.550075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:40.953 [2024-11-04 16:14:59.550091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:40.953 [2024-11-04 16:14:59.550103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.953 [2024-11-04 16:14:59.550197] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:40.953 [2024-11-04 16:14:59.550211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:40.953 [2024-11-04 16:14:59.550224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.953 [2024-11-04 16:14:59.550235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.953 [2024-11-04 16:14:59.550247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:40.953 [2024-11-04 16:14:59.550258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:40.953 [2024-11-04 16:14:59.550269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:40.953 [2024-11-04 16:14:59.550280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:40.953 [2024-11-04 16:14:59.550291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:40.953 [2024-11-04 16:14:59.550302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.953 [2024-11-04 16:14:59.550314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:40.953 [2024-11-04 16:14:59.550325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:40.953 [2024-11-04 16:14:59.550336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.953 [2024-11-04 16:14:59.550360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:40.953 [2024-11-04 16:14:59.550370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:40.953 [2024-11-04 16:14:59.550381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.953 [2024-11-04 16:14:59.550394] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:40.953 [2024-11-04 16:14:59.550404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:40.953 [2024-11-04 16:14:59.550415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.953 [2024-11-04 16:14:59.550426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:40.953 [2024-11-04 16:14:59.550437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:40.953 [2024-11-04 16:14:59.550448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.953 [2024-11-04 16:14:59.550459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:40.953 [2024-11-04 16:14:59.550470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:40.953 [2024-11-04 16:14:59.550481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.953 [2024-11-04 16:14:59.550491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:40.953 [2024-11-04 16:14:59.550503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:40.953 [2024-11-04 16:14:59.550524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.953 [2024-11-04 16:14:59.550552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:40.953 [2024-11-04 16:14:59.550564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:40.953 [2024-11-04 16:14:59.550574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.953 [2024-11-04 16:14:59.550585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:40.953 [2024-11-04 16:14:59.550596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:40.953 [2024-11-04 16:14:59.550607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.953 [2024-11-04 16:14:59.550618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:40.953 [2024-11-04 16:14:59.550629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:40.953 [2024-11-04 16:14:59.550640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.953 [2024-11-04 16:14:59.550650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:40.953 [2024-11-04 16:14:59.550662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:40.953 [2024-11-04 16:14:59.550673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.953 [2024-11-04 16:14:59.550683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:40.953 [2024-11-04 16:14:59.550694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:40.954 [2024-11-04 16:14:59.550708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.954 [2024-11-04 16:14:59.550719] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:40.954 [2024-11-04 16:14:59.550731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:40.954 [2024-11-04 16:14:59.550742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.954 [2024-11-04 16:14:59.550760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.954 [2024-11-04 16:14:59.550789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:40.954 
[2024-11-04 16:14:59.550801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:40.954 [2024-11-04 16:14:59.550812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:40.954 [2024-11-04 16:14:59.550823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:40.954 [2024-11-04 16:14:59.550834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:40.954 [2024-11-04 16:14:59.550845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:40.954 [2024-11-04 16:14:59.550857] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:40.954 [2024-11-04 16:14:59.550871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.954 [2024-11-04 16:14:59.550884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:40.954 [2024-11-04 16:14:59.550897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:40.954 [2024-11-04 16:14:59.550908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:40.954 [2024-11-04 16:14:59.550920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:40.954 [2024-11-04 16:14:59.550932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:40.954 [2024-11-04 16:14:59.550944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:40.954 [2024-11-04 16:14:59.550957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:40.954 [2024-11-04 16:14:59.550970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:40.954 [2024-11-04 16:14:59.550981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:40.954 [2024-11-04 16:14:59.550993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:40.954 [2024-11-04 16:14:59.551005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:40.954 [2024-11-04 16:14:59.551016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:40.954 [2024-11-04 16:14:59.551028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:40.954 [2024-11-04 16:14:59.551040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:40.954 [2024-11-04 16:14:59.551051] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:40.954 [2024-11-04 16:14:59.551065] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.954 [2024-11-04 16:14:59.551078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:40.954 [2024-11-04 16:14:59.551090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:40.954 [2024-11-04 16:14:59.551103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:40.954 [2024-11-04 16:14:59.551115] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:40.954 [2024-11-04 16:14:59.551128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.954 [2024-11-04 16:14:59.551140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:40.954 [2024-11-04 16:14:59.551157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:20:40.954 [2024-11-04 16:14:59.551169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.954 [2024-11-04 16:14:59.588556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.954 [2024-11-04 16:14:59.588599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.954 [2024-11-04 16:14:59.588614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.389 ms 00:20:40.954 [2024-11-04 16:14:59.588627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.954 [2024-11-04 16:14:59.588744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.954 [2024-11-04 16:14:59.588796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:40.954 [2024-11-04 16:14:59.588810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:40.954 [2024-11-04 16:14:59.588821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.954 [2024-11-04 16:14:59.653776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.954 [2024-11-04 16:14:59.653819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:40.954 [2024-11-04 16:14:59.653834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.032 ms 00:20:40.954 [2024-11-04 16:14:59.653850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.954 [2024-11-04 16:14:59.653957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.954 [2024-11-04 16:14:59.653972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:40.954 [2024-11-04 16:14:59.653984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:40.954 [2024-11-04 16:14:59.653995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.954 [2024-11-04 16:14:59.654442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.954 [2024-11-04 16:14:59.654458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:40.954 [2024-11-04 16:14:59.654469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:20:40.954 [2024-11-04 16:14:59.654485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.954 [2024-11-04 
16:14:59.654629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.954 [2024-11-04 16:14:59.654645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:40.954 [2024-11-04 16:14:59.654657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:20:40.954 [2024-11-04 16:14:59.654670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.214 [2024-11-04 16:14:59.674038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.214 [2024-11-04 16:14:59.674075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:41.214 [2024-11-04 16:14:59.674090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.374 ms 00:20:41.214 [2024-11-04 16:14:59.674101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.214 [2024-11-04 16:14:59.693093] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:41.214 [2024-11-04 16:14:59.693136] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:41.214 [2024-11-04 16:14:59.693153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.214 [2024-11-04 16:14:59.693165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:41.214 [2024-11-04 16:14:59.693178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.967 ms 00:20:41.214 [2024-11-04 16:14:59.693189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.214 [2024-11-04 16:14:59.720983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.214 [2024-11-04 16:14:59.721039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:41.214 [2024-11-04 16:14:59.721055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.751 ms 00:20:41.214 [2024-11-04 16:14:59.721066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.214 [2024-11-04 16:14:59.738036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.214 [2024-11-04 16:14:59.738075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:41.214 [2024-11-04 16:14:59.738089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.910 ms 00:20:41.214 [2024-11-04 16:14:59.738100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.214 [2024-11-04 16:14:59.755522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.214 [2024-11-04 16:14:59.755685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:41.214 [2024-11-04 16:14:59.755709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.368 ms 00:20:41.214 [2024-11-04 16:14:59.755720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.214 [2024-11-04 16:14:59.756552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.214 [2024-11-04 16:14:59.756587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:41.214 [2024-11-04 16:14:59.756602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms 00:20:41.214 [2024-11-04 16:14:59.756614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.214 [2024-11-04 16:14:59.837865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:41.214 [2024-11-04 16:14:59.838122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:41.214 [2024-11-04 16:14:59.838167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.349 ms 00:20:41.214 [2024-11-04 16:14:59.838180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.214 [2024-11-04 16:14:59.848625] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:41.214 [2024-11-04 16:14:59.864242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.214 [2024-11-04 16:14:59.864484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:41.214 [2024-11-04 16:14:59.864512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.974 ms 00:20:41.214 [2024-11-04 16:14:59.864525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.214 [2024-11-04 16:14:59.864656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.214 [2024-11-04 16:14:59.864671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:41.214 [2024-11-04 16:14:59.864684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:41.214 [2024-11-04 16:14:59.864696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.214 [2024-11-04 16:14:59.864778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.214 [2024-11-04 16:14:59.864810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:41.214 [2024-11-04 16:14:59.864823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:41.214 [2024-11-04 16:14:59.864835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.214 [2024-11-04 16:14:59.864873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.214 [2024-11-04 16:14:59.864890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:41.214 [2024-11-04 16:14:59.864902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:41.214 [2024-11-04 16:14:59.864915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.214 [2024-11-04 16:14:59.864953] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:41.214 [2024-11-04 16:14:59.864967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.214 [2024-11-04 16:14:59.864979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:41.214 [2024-11-04 16:14:59.864992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:41.214 [2024-11-04 16:14:59.865004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.214 [2024-11-04 16:14:59.899848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.215 [2024-11-04 16:14:59.899892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:41.215 [2024-11-04 16:14:59.899907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.872 ms 00:20:41.215 [2024-11-04 16:14:59.899919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.215 [2024-11-04 16:14:59.900040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.215 [2024-11-04 16:14:59.900055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:41.215 [2024-11-04 16:14:59.900067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:41.215 [2024-11-04 16:14:59.900078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.215 [2024-11-04 16:14:59.901028] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:41.215 [2024-11-04 16:14:59.905086] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 388.816 ms, result 0 00:20:41.215 [2024-11-04 16:14:59.906062] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:41.215 [2024-11-04 16:14:59.924328] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:42.594  [2024-11-04T16:15:02.254Z] Copying: 28/256 [MB] (28 MBps) [2024-11-04T16:15:03.192Z] Copying: 52/256 [MB] (24 MBps) [2024-11-04T16:15:04.128Z] Copying: 77/256 [MB] (24 MBps) [2024-11-04T16:15:05.065Z] Copying: 100/256 [MB] (23 MBps) [2024-11-04T16:15:06.002Z] Copying: 124/256 [MB] (23 MBps) [2024-11-04T16:15:06.940Z] Copying: 147/256 [MB] (22 MBps) [2024-11-04T16:15:08.319Z] Copying: 170/256 [MB] (22 MBps) [2024-11-04T16:15:09.258Z] Copying: 192/256 [MB] (22 MBps) [2024-11-04T16:15:10.201Z] Copying: 215/256 [MB] (22 MBps) [2024-11-04T16:15:10.777Z] Copying: 239/256 [MB] (23 MBps) [2024-11-04T16:15:10.777Z] Copying: 256/256 [MB] (average 23 MBps)[2024-11-04 16:15:10.607382] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:52.055 [2024-11-04 16:15:10.621722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.055 [2024-11-04 16:15:10.621776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:52.055 [2024-11-04 16:15:10.621811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:52.055 [2024-11-04 16:15:10.621831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.055 [2024-11-04 16:15:10.621857] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:52.055 [2024-11-04 16:15:10.626015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.055 [2024-11-04 16:15:10.626044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:52.055 [2024-11-04 16:15:10.626058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.129 ms 00:20:52.055 [2024-11-04 16:15:10.626070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.055 [2024-11-04 16:15:10.626300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.055 [2024-11-04 16:15:10.626318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:52.055 [2024-11-04 16:15:10.626331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:20:52.055 [2024-11-04 16:15:10.626343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.055 [2024-11-04 16:15:10.629196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.055 [2024-11-04 16:15:10.629373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:52.055 [2024-11-04 16:15:10.629396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.839 ms 00:20:52.055 [2024-11-04 16:15:10.629408] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.055 [2024-11-04 16:15:10.634797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.055 [2024-11-04 16:15:10.634830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:52.055 [2024-11-04 16:15:10.634845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.366 ms 00:20:52.055 [2024-11-04 16:15:10.634856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.055 [2024-11-04 16:15:10.670477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.055 [2024-11-04 16:15:10.670527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:52.055 [2024-11-04 16:15:10.670560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.608 ms 00:20:52.055 [2024-11-04 16:15:10.670572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.055 [2024-11-04 16:15:10.691207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.055 [2024-11-04 16:15:10.691255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:52.055 [2024-11-04 16:15:10.691270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.598 ms 00:20:52.055 [2024-11-04 16:15:10.691303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.055 [2024-11-04 16:15:10.691437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.055 [2024-11-04 16:15:10.691452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:52.055 [2024-11-04 16:15:10.691464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:52.055 [2024-11-04 16:15:10.691476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.055 [2024-11-04 16:15:10.726289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.055 [2024-11-04 16:15:10.726331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:52.055 [2024-11-04 16:15:10.726346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.835 ms 00:20:52.055 [2024-11-04 16:15:10.726356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.055 [2024-11-04 16:15:10.761471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.055 [2024-11-04 16:15:10.761511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:52.055 [2024-11-04 16:15:10.761526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.112 ms 00:20:52.055 [2024-11-04 16:15:10.761536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.324 [2024-11-04 16:15:10.795208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.324 [2024-11-04 16:15:10.795382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:52.324 [2024-11-04 16:15:10.795405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.653 ms 00:20:52.324 [2024-11-04 16:15:10.795417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.324 [2024-11-04 16:15:10.829352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.324 [2024-11-04 16:15:10.829538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:52.324 [2024-11-04 16:15:10.829561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 33.896 ms 00:20:52.324 [2024-11-04 16:15:10.829573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.324 [2024-11-04 16:15:10.829632] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:52.324 [2024-11-04 16:15:10.829649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:52.324 [2024-11-04 16:15:10.829941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 
[2024-11-04 16:15:10.829953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.829965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.829977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.829989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:20:52.325 [2024-11-04 16:15:10.830264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:52.325 [2024-11-04 16:15:10.830974] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:52.325 [2024-11-04 16:15:10.830985] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84bc5547-bd10-4723-8e79-2ff33cc227b9 00:20:52.325 [2024-11-04 16:15:10.830997] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:52.325 [2024-11-04 16:15:10.831008] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:52.325 [2024-11-04 16:15:10.831018] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:52.325 [2024-11-04 16:15:10.831030] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:52.325 [2024-11-04 16:15:10.831041] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:52.325 [2024-11-04 16:15:10.831052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:52.325 [2024-11-04 16:15:10.831062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:52.325 [2024-11-04 16:15:10.831072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:52.325 [2024-11-04 16:15:10.831082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:52.325 [2024-11-04 16:15:10.831093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.325 [2024-11-04 16:15:10.831112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:52.325 [2024-11-04 16:15:10.831124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.465 ms 00:20:52.325 [2024-11-04 16:15:10.831135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.325 [2024-11-04 16:15:10.850424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.325 [2024-11-04 16:15:10.850459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:52.326 [2024-11-04 16:15:10.850473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.296 ms 00:20:52.326 [2024-11-04 16:15:10.850483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.326 [2024-11-04 16:15:10.851051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.326 [2024-11-04 16:15:10.851070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:52.326 [2024-11-04 16:15:10.851083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:20:52.326 [2024-11-04 16:15:10.851094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.326 [2024-11-04 16:15:10.903773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.326 [2024-11-04 16:15:10.903812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:52.326 [2024-11-04 16:15:10.903828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.326 [2024-11-04 16:15:10.903840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.326 [2024-11-04 16:15:10.903957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.326 [2024-11-04 
16:15:10.903971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:52.326 [2024-11-04 16:15:10.903983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.326 [2024-11-04 16:15:10.903995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.326 [2024-11-04 16:15:10.904051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.326 [2024-11-04 16:15:10.904065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:52.326 [2024-11-04 16:15:10.904077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.326 [2024-11-04 16:15:10.904089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.326 [2024-11-04 16:15:10.904111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.326 [2024-11-04 16:15:10.904141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:52.326 [2024-11-04 16:15:10.904152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.326 [2024-11-04 16:15:10.904163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.326 [2024-11-04 16:15:11.021765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.326 [2024-11-04 16:15:11.021821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:52.326 [2024-11-04 16:15:11.021838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.326 [2024-11-04 16:15:11.021849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.585 [2024-11-04 16:15:11.117704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.585 [2024-11-04 16:15:11.117783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:52.585 [2024-11-04 16:15:11.117799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.585 [2024-11-04 16:15:11.117827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.585 [2024-11-04 16:15:11.117893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.585 [2024-11-04 16:15:11.117906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:52.585 [2024-11-04 16:15:11.117935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.585 [2024-11-04 16:15:11.117948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.585 [2024-11-04 16:15:11.117980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.585 [2024-11-04 16:15:11.117993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:52.585 [2024-11-04 16:15:11.118012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.585 [2024-11-04 16:15:11.118024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.585 [2024-11-04 16:15:11.118150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.585 [2024-11-04 16:15:11.118165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:52.585 [2024-11-04 16:15:11.118178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.585 [2024-11-04 16:15:11.118190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.585 [2024-11-04 16:15:11.118232] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.585 [2024-11-04 16:15:11.118247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:52.585 [2024-11-04 16:15:11.118259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.585 [2024-11-04 16:15:11.118276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.585 [2024-11-04 16:15:11.118318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.585 [2024-11-04 16:15:11.118330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:52.585 [2024-11-04 16:15:11.118342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.585 [2024-11-04 16:15:11.118354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.585 [2024-11-04 16:15:11.118400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.585 [2024-11-04 16:15:11.118413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:52.585 [2024-11-04 16:15:11.118431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.585 [2024-11-04 16:15:11.118442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.585 [2024-11-04 16:15:11.118602] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 497.675 ms, result 0 00:20:53.521 00:20:53.521 00:20:53.521 16:15:12 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:53.521 16:15:12 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:54.090 16:15:12 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:54.090 [2024-11-04 16:15:12.647507] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:20:54.090 [2024-11-04 16:15:12.647618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75942 ] 00:20:54.349 [2024-11-04 16:15:12.826694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.349 [2024-11-04 16:15:12.940597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.608 [2024-11-04 16:15:13.279651] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.608 [2024-11-04 16:15:13.279726] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.868 [2024-11-04 16:15:13.442435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.868 [2024-11-04 16:15:13.442488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:54.868 [2024-11-04 16:15:13.442504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:54.868 [2024-11-04 16:15:13.442524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.868 [2024-11-04 16:15:13.445656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.868 [2024-11-04 16:15:13.445700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:54.868 [2024-11-04 16:15:13.445714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.112 ms 00:20:54.868 [2024-11-04 16:15:13.445725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.868 [2024-11-04 16:15:13.445854] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:54.868 [2024-11-04 16:15:13.446862] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:54.868 [2024-11-04 16:15:13.446901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.868 [2024-11-04 16:15:13.446915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:54.868 [2024-11-04 16:15:13.446928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.058 ms 00:20:54.868 [2024-11-04 16:15:13.446939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.868 [2024-11-04 16:15:13.448618] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:54.868 [2024-11-04 16:15:13.466433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.868 [2024-11-04 16:15:13.466480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:54.868 [2024-11-04 16:15:13.466496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.845 ms 00:20:54.868 [2024-11-04 16:15:13.466508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.868 [2024-11-04 16:15:13.466621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.868 [2024-11-04 16:15:13.466638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:54.868 [2024-11-04 16:15:13.466650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:54.868 [2024-11-04 16:15:13.466661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.868 [2024-11-04 16:15:13.473618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:54.868 [2024-11-04 16:15:13.473809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:54.868 [2024-11-04 16:15:13.473834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.924 ms 00:20:54.868 [2024-11-04 16:15:13.473846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.868 [2024-11-04 16:15:13.473962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.868 [2024-11-04 16:15:13.473978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:54.868 [2024-11-04 16:15:13.473991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:54.868 [2024-11-04 16:15:13.474003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.868 [2024-11-04 16:15:13.474035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.868 [2024-11-04 16:15:13.474052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:54.868 [2024-11-04 16:15:13.474065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:54.868 [2024-11-04 16:15:13.474076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.868 [2024-11-04 16:15:13.474102] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:54.868 [2024-11-04 16:15:13.479071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.868 [2024-11-04 16:15:13.479108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:54.868 [2024-11-04 16:15:13.479122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.983 ms 00:20:54.868 [2024-11-04 16:15:13.479134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.868 [2024-11-04 16:15:13.479207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.868 [2024-11-04 16:15:13.479220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:54.868 [2024-11-04 16:15:13.479233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:54.868 [2024-11-04 16:15:13.479244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.868 [2024-11-04 16:15:13.479271] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:54.868 [2024-11-04 16:15:13.479300] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:54.868 [2024-11-04 16:15:13.479338] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:54.868 [2024-11-04 16:15:13.479357] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:54.868 [2024-11-04 16:15:13.479445] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:54.868 [2024-11-04 16:15:13.479461] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:54.868 [2024-11-04 16:15:13.479475] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:54.868 [2024-11-04 16:15:13.479490] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:54.868 [2024-11-04 16:15:13.479508] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:54.868 [2024-11-04 16:15:13.479521] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:54.868 [2024-11-04 16:15:13.479533] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:54.868 [2024-11-04 16:15:13.479544] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:54.868 [2024-11-04 16:15:13.479556] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:54.868 [2024-11-04 16:15:13.479567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.868 [2024-11-04 16:15:13.479579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:54.868 [2024-11-04 16:15:13.479592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:20:54.868 [2024-11-04 16:15:13.479603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.868 [2024-11-04 16:15:13.479688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.868 [2024-11-04 16:15:13.479701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:54.868 [2024-11-04 16:15:13.479715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:54.868 [2024-11-04 16:15:13.479727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.868 [2024-11-04 16:15:13.479846] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:54.868 [2024-11-04 16:15:13.479863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:54.868 [2024-11-04 16:15:13.479875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.868 [2024-11-04 16:15:13.479887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.868 [2024-11-04 16:15:13.479899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:54.868 [2024-11-04 16:15:13.479911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:54.868 [2024-11-04 16:15:13.479922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:54.868 [2024-11-04 16:15:13.479933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:54.868 [2024-11-04 16:15:13.479945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:54.868 [2024-11-04 16:15:13.479956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.868 [2024-11-04 16:15:13.479966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:54.868 [2024-11-04 16:15:13.479977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:54.868 [2024-11-04 16:15:13.479988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.868 [2024-11-04 16:15:13.480013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:54.868 [2024-11-04 16:15:13.480025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:54.868 [2024-11-04 16:15:13.480040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.869 [2024-11-04 16:15:13.480051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:54.869 [2024-11-04 16:15:13.480062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:54.869 [2024-11-04 16:15:13.480072] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.869 [2024-11-04 16:15:13.480083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:54.869 [2024-11-04 16:15:13.480094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:54.869 [2024-11-04 16:15:13.480105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.869 [2024-11-04 16:15:13.480115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:54.869 [2024-11-04 16:15:13.480126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:54.869 [2024-11-04 16:15:13.480136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.869 [2024-11-04 16:15:13.480146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:54.869 [2024-11-04 16:15:13.480157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:54.869 [2024-11-04 16:15:13.480168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.869 [2024-11-04 16:15:13.480178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:54.869 [2024-11-04 16:15:13.480189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:54.869 [2024-11-04 16:15:13.480200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.869 [2024-11-04 16:15:13.480210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:54.869 [2024-11-04 16:15:13.480221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:54.869 [2024-11-04 16:15:13.480231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.869 [2024-11-04 16:15:13.480241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:54.869 [2024-11-04 16:15:13.480251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:54.869 [2024-11-04 16:15:13.480278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.869 [2024-11-04 16:15:13.480289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:54.869 [2024-11-04 16:15:13.480300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:54.869 [2024-11-04 16:15:13.480311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.869 [2024-11-04 16:15:13.480321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:54.869 [2024-11-04 16:15:13.480332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:54.869 [2024-11-04 16:15:13.480343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.869 [2024-11-04 16:15:13.480353] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:54.869 [2024-11-04 16:15:13.480365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:54.869 [2024-11-04 16:15:13.480377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.869 [2024-11-04 16:15:13.480394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.869 [2024-11-04 16:15:13.480406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:54.869 [2024-11-04 16:15:13.480418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:54.869 [2024-11-04 16:15:13.480429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:54.869 
[2024-11-04 16:15:13.480440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:54.869 [2024-11-04 16:15:13.480450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:54.869 [2024-11-04 16:15:13.480462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:54.869 [2024-11-04 16:15:13.480475] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:54.869 [2024-11-04 16:15:13.480488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.869 [2024-11-04 16:15:13.480501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:54.869 [2024-11-04 16:15:13.480513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:54.869 [2024-11-04 16:15:13.480525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:54.869 [2024-11-04 16:15:13.480538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:54.869 [2024-11-04 16:15:13.480550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:54.869 [2024-11-04 16:15:13.480562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:54.869 [2024-11-04 16:15:13.480574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:54.869 [2024-11-04 16:15:13.480586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:54.869 [2024-11-04 16:15:13.480598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:54.869 [2024-11-04 16:15:13.480610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:54.869 [2024-11-04 16:15:13.480622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:54.869 [2024-11-04 16:15:13.480634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:54.869 [2024-11-04 16:15:13.480645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:54.869 [2024-11-04 16:15:13.480657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:54.869 [2024-11-04 16:15:13.480669] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:54.869 [2024-11-04 16:15:13.480682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.869 [2024-11-04 16:15:13.480694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:54.869 [2024-11-04 16:15:13.480706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:54.869 [2024-11-04 16:15:13.480718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:54.869 [2024-11-04 16:15:13.480730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:54.869 [2024-11-04 16:15:13.480742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.869 [2024-11-04 16:15:13.480754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:54.869 [2024-11-04 16:15:13.480771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:20:54.869 [2024-11-04 16:15:13.480795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.869 [2024-11-04 16:15:13.518266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.869 [2024-11-04 16:15:13.518302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:54.869 [2024-11-04 16:15:13.518318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.472 ms 00:20:54.869 [2024-11-04 16:15:13.518329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.869 [2024-11-04 16:15:13.518444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.869 [2024-11-04 16:15:13.518463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:54.869 [2024-11-04 16:15:13.518475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:54.869 [2024-11-04 16:15:13.518487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.869 [2024-11-04 16:15:13.582867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.869 [2024-11-04 16:15:13.582906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:54.869 [2024-11-04 16:15:13.582922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.458 ms 00:20:54.869 [2024-11-04 16:15:13.582938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.869 [2024-11-04 16:15:13.583042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.869 [2024-11-04 16:15:13.583056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.869 [2024-11-04 16:15:13.583069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:54.869 [2024-11-04 16:15:13.583081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.869 [2024-11-04 16:15:13.583529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.869 [2024-11-04 16:15:13.583543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:54.869 [2024-11-04 16:15:13.583555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:20:54.869 [2024-11-04 16:15:13.583570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.869 [2024-11-04 16:15:13.583686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.869 [2024-11-04 16:15:13.583702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.869 [2024-11-04 16:15:13.583714] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:20:54.869 [2024-11-04 16:15:13.583725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.129 [2024-11-04 16:15:13.603452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.129 [2024-11-04 16:15:13.603489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:55.129 [2024-11-04 16:15:13.603503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.713 ms 00:20:55.129 [2024-11-04 16:15:13.603515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.129 [2024-11-04 16:15:13.621549] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:55.129 [2024-11-04 16:15:13.621589] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:55.129 [2024-11-04 16:15:13.621605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.129 [2024-11-04 16:15:13.621617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:55.129 [2024-11-04 16:15:13.621630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.999 ms 00:20:55.129 [2024-11-04 16:15:13.621641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.129 [2024-11-04 16:15:13.649821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.129 [2024-11-04 16:15:13.649879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:55.129 [2024-11-04 16:15:13.649894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.138 ms 00:20:55.129 [2024-11-04 16:15:13.649906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.129 [2024-11-04 16:15:13.666902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.129 [2024-11-04 16:15:13.666940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:55.129 [2024-11-04 16:15:13.666955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.935 ms 00:20:55.129 [2024-11-04 16:15:13.666966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.129 [2024-11-04 16:15:13.683983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.129 [2024-11-04 16:15:13.684019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:55.129 [2024-11-04 16:15:13.684033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.962 ms 00:20:55.129 [2024-11-04 16:15:13.684061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.129 [2024-11-04 16:15:13.684830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.129 [2024-11-04 16:15:13.684873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:55.129 [2024-11-04 16:15:13.684887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:20:55.129 [2024-11-04 16:15:13.684899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.129 [2024-11-04 16:15:13.767054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.129 [2024-11-04 16:15:13.767116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:55.129 [2024-11-04 16:15:13.767134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.254 ms 00:20:55.129 [2024-11-04 16:15:13.767146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.129 [2024-11-04 16:15:13.777570] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:55.129 [2024-11-04 16:15:13.793288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.129 [2024-11-04 16:15:13.793334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:55.129 [2024-11-04 16:15:13.793351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.077 ms 00:20:55.129 [2024-11-04 16:15:13.793363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.129 [2024-11-04 16:15:13.793485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.129 [2024-11-04 16:15:13.793499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:55.129 [2024-11-04 16:15:13.793512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:55.129 [2024-11-04 16:15:13.793523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.129 [2024-11-04 16:15:13.793579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.129 [2024-11-04 16:15:13.793591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:55.129 [2024-11-04 16:15:13.793603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:55.129 [2024-11-04 16:15:13.793614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.129 [2024-11-04 16:15:13.793644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.129 [2024-11-04 16:15:13.793660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:55.129 [2024-11-04 16:15:13.793671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:55.129 [2024-11-04 16:15:13.793682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.129 [2024-11-04 16:15:13.793723] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:55.129 [2024-11-04 16:15:13.793736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.129 [2024-11-04 16:15:13.793767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:55.129 [2024-11-04 16:15:13.793780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:55.129 [2024-11-04 16:15:13.793808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.129 [2024-11-04 16:15:13.828514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.129 [2024-11-04 16:15:13.828559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:55.129 [2024-11-04 16:15:13.828575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.733 ms 00:20:55.129 [2024-11-04 16:15:13.828586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.129 [2024-11-04 16:15:13.828709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.129 [2024-11-04 16:15:13.828724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:55.129 [2024-11-04 16:15:13.828737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:55.129 [2024-11-04 16:15:13.828773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:55.129 [2024-11-04 16:15:13.829813] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:55.129 [2024-11-04 16:15:13.834113] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 387.679 ms, result 0 00:20:55.129 [2024-11-04 16:15:13.835068] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:55.388 [2024-11-04 16:15:13.852530] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:55.388  [2024-11-04T16:15:14.110Z] Copying: 4096/4096 [kB] (average 23 MBps)[2024-11-04 16:15:14.029687] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:55.388 [2024-11-04 16:15:14.043410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.388 [2024-11-04 16:15:14.043459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:55.388 [2024-11-04 16:15:14.043475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:55.388 [2024-11-04 16:15:14.043495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.389 [2024-11-04 16:15:14.043521] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:55.389 [2024-11-04 16:15:14.047701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.389 [2024-11-04 16:15:14.047889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:55.389 [2024-11-04 16:15:14.047915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.169 ms 00:20:55.389 [2024-11-04 16:15:14.047927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.389 [2024-11-04 16:15:14.049833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.389 [2024-11-04 16:15:14.049863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:55.389 [2024-11-04 16:15:14.049878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.874 ms 00:20:55.389 [2024-11-04 16:15:14.049890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.389 [2024-11-04 16:15:14.053157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.389 [2024-11-04 16:15:14.053291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:55.389 [2024-11-04 16:15:14.053378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.250 ms 00:20:55.389 [2024-11-04 16:15:14.053423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.389 [2024-11-04 16:15:14.059042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.389 [2024-11-04 16:15:14.059197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:55.389 [2024-11-04 16:15:14.059278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.542 ms 00:20:55.389 [2024-11-04 16:15:14.059322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.389 [2024-11-04 16:15:14.094508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.389 [2024-11-04 16:15:14.094672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:55.389 [2024-11-04 16:15:14.094829] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 35.158 ms 00:20:55.389 [2024-11-04 16:15:14.094874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.649 [2024-11-04 16:15:14.115143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.649 [2024-11-04 16:15:14.115328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:55.649 [2024-11-04 16:15:14.115457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.188 ms 00:20:55.649 [2024-11-04 16:15:14.115501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.649 [2024-11-04 16:15:14.115660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.649 [2024-11-04 16:15:14.115789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:55.649 [2024-11-04 16:15:14.115848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:55.649 [2024-11-04 16:15:14.115887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.649 [2024-11-04 16:15:14.151326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.649 [2024-11-04 16:15:14.151472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:55.649 [2024-11-04 16:15:14.151552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.425 ms 00:20:55.649 [2024-11-04 16:15:14.151592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.649 [2024-11-04 16:15:14.185442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.649 [2024-11-04 16:15:14.185581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:55.649 [2024-11-04 16:15:14.185675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.797 ms 00:20:55.649 [2024-11-04 16:15:14.185714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.649 [2024-11-04 16:15:14.219133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.649 [2024-11-04 16:15:14.219284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:55.649 [2024-11-04 16:15:14.219405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.380 ms 00:20:55.649 [2024-11-04 16:15:14.219446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.649 [2024-11-04 16:15:14.253601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.649 [2024-11-04 16:15:14.253737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:55.649 [2024-11-04 16:15:14.253849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.101 ms 00:20:55.649 [2024-11-04 16:15:14.253890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.649 [2024-11-04 16:15:14.253987] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:55.649 [2024-11-04 16:15:14.254037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.254148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.254209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.254265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:55.649 [2024-11-04 16:15:14.254365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.254426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.254650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.254706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.254783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.254841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.254960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.255022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.255078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.255133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.255241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.255301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.255356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.255412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.255558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.255614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.255669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.255724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.255849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.255951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:55.649 [2024-11-04 16:15:14.256684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.256993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257091] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:55.650 [2024-11-04 16:15:14.257378] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:55.650 [2024-11-04 16:15:14.257390] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84bc5547-bd10-4723-8e79-2ff33cc227b9 00:20:55.650 [2024-11-04 16:15:14.257402] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:55.650 [2024-11-04 16:15:14.257414] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:55.650 [2024-11-04 16:15:14.257424] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:55.650 [2024-11-04 16:15:14.257437] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:55.650 [2024-11-04 16:15:14.257448] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:55.650 [2024-11-04 16:15:14.257460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:55.650 [2024-11-04 16:15:14.257471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:55.650 [2024-11-04 16:15:14.257482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:55.650 [2024-11-04 16:15:14.257492] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:55.650 [2024-11-04 16:15:14.257504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.650 [2024-11-04 16:15:14.257521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:55.650 [2024-11-04 16:15:14.257534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.524 ms 00:20:55.650 [2024-11-04 16:15:14.257545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.650 [2024-11-04 16:15:14.276806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.650 [2024-11-04 16:15:14.276842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:55.650 [2024-11-04 16:15:14.276856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.263 ms 00:20:55.650 [2024-11-04 16:15:14.276884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.650 [2024-11-04 16:15:14.277416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.650 [2024-11-04 16:15:14.277432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:55.650 [2024-11-04 16:15:14.277445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:20:55.650 [2024-11-04 16:15:14.277456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.650 [2024-11-04 16:15:14.329147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.650 [2024-11-04 16:15:14.329307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:55.650 [2024-11-04 16:15:14.329346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.650 [2024-11-04 16:15:14.329359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.650 [2024-11-04 16:15:14.329460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.650 [2024-11-04 16:15:14.329474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:55.650 [2024-11-04 16:15:14.329486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.650 [2024-11-04 16:15:14.329498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.650 [2024-11-04 16:15:14.329555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.650 [2024-11-04 16:15:14.329570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:55.650 [2024-11-04 16:15:14.329583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.650 [2024-11-04 16:15:14.329595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.650 [2024-11-04 16:15:14.329617] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.650 [2024-11-04 16:15:14.329635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:55.650 [2024-11-04 16:15:14.329648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.650 [2024-11-04 16:15:14.329660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.910 [2024-11-04 16:15:14.450725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.910 [2024-11-04 16:15:14.450791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:55.910 [2024-11-04 16:15:14.450808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.910 [2024-11-04 16:15:14.450819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.910 [2024-11-04 16:15:14.547249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.910 [2024-11-04 16:15:14.547297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:55.910 [2024-11-04 16:15:14.547312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.910 [2024-11-04 16:15:14.547324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.910 [2024-11-04 16:15:14.547394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.910 [2024-11-04 16:15:14.547407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:55.910 [2024-11-04 16:15:14.547419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.910 [2024-11-04 16:15:14.547430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.910 [2024-11-04 16:15:14.547461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.910 [2024-11-04 16:15:14.547473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:55.910 [2024-11-04 16:15:14.547492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.910 [2024-11-04 16:15:14.547503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.910 [2024-11-04 16:15:14.547617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.910 [2024-11-04 16:15:14.547632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:55.910 [2024-11-04 16:15:14.547644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.910 [2024-11-04 16:15:14.547656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.910 [2024-11-04 16:15:14.547699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.910 [2024-11-04 16:15:14.547713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:55.910 [2024-11-04 16:15:14.547724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.910 [2024-11-04 16:15:14.547740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.910 [2024-11-04 16:15:14.547818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.910 [2024-11-04 16:15:14.547831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:55.910 [2024-11-04 16:15:14.547844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.910 [2024-11-04 16:15:14.547855] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:55.910 [2024-11-04 16:15:14.547900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:55.910 [2024-11-04 16:15:14.547927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:55.910 [2024-11-04 16:15:14.547962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:55.910 [2024-11-04 16:15:14.547973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.910 [2024-11-04 16:15:14.548120] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 505.518 ms, result 0 00:20:56.847 00:20:56.847 00:20:56.847 16:15:15 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=75978 00:20:56.847 16:15:15 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:56.847 16:15:15 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 75978 00:20:56.847 16:15:15 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 75978 ']' 00:20:56.847 16:15:15 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.847 16:15:15 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:56.847 16:15:15 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.847 16:15:15 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:56.847 16:15:15 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:57.106 [2024-11-04 16:15:15.667429] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
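[editor's note] For readability, here is a condensed sketch of what ftl/trim.sh is doing across the surrounding log lines: it restarts spdk_tgt with the ftl_init log flag, waits for the RPC socket, replays the saved bdev/FTL configuration, issues the two bdev_ftl_unmap calls traced further down, and finally tears the target down (which triggers the second 'FTL shutdown' management process near the end of this excerpt). This is not the verbatim test script: the helper names waitforlisten and killprocess come from test/common/autotest_common.sh as shown in the trace, and how the JSON configuration is fed to load_config is not visible in this excerpt, so that part is an assumption.

  # Sketch of the flow shown in this portion of the log (assumptions noted above)
  spdk_dir=/home/vagrant/spdk_repo/spdk
  rpc_py="$spdk_dir/scripts/rpc.py"

  "$spdk_dir/build/bin/spdk_tgt" -L ftl_init &   # emits the mngt/ftl_mngt.c trace_step notices
  svcpid=$!
  waitforlisten "$svcpid"                        # blocks until /var/tmp/spdk.sock accepts RPCs

  "$rpc_py" load_config                          # re-creates the nvc0n1 cache bdev and ftl0 (config supplied by the harness, not shown here)

  # Trim two 1024-block ranges: one at LBA 0, one at the top of the L2P space
  # (23591936 + 1024 = 23592960, the L2P entry count reported in the layout dump)
  "$rpc_py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  "$rpc_py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

  killprocess "$svcpid"                          # triggers the FTL shutdown sequence seen below

[end editor's note]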
00:20:57.106 [2024-11-04 16:15:15.667549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75978 ] 00:20:57.365 [2024-11-04 16:15:15.849522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.365 [2024-11-04 16:15:15.954480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.301 16:15:16 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:58.301 16:15:16 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:20:58.301 16:15:16 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:58.301 [2024-11-04 16:15:17.004531] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:58.301 [2024-11-04 16:15:17.004598] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:58.561 [2024-11-04 16:15:17.191773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.561 [2024-11-04 16:15:17.191825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:58.561 [2024-11-04 16:15:17.191848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:58.561 [2024-11-04 16:15:17.191861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.561 [2024-11-04 16:15:17.195538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.561 [2024-11-04 16:15:17.195584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:58.561 [2024-11-04 16:15:17.195602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.659 ms 00:20:58.561 [2024-11-04 16:15:17.195614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.561 [2024-11-04 16:15:17.195730] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:58.561 [2024-11-04 16:15:17.196811] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:58.561 [2024-11-04 16:15:17.197063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.561 [2024-11-04 16:15:17.197082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:58.561 [2024-11-04 16:15:17.197099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.342 ms 00:20:58.561 [2024-11-04 16:15:17.197111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.561 [2024-11-04 16:15:17.198678] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:58.561 [2024-11-04 16:15:17.216548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.561 [2024-11-04 16:15:17.216595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:58.561 [2024-11-04 16:15:17.216612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.904 ms 00:20:58.561 [2024-11-04 16:15:17.216626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.561 [2024-11-04 16:15:17.216725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.561 [2024-11-04 16:15:17.216743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:58.561 [2024-11-04 16:15:17.216771] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:58.561 [2024-11-04 16:15:17.216786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.561 [2024-11-04 16:15:17.223724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.561 [2024-11-04 16:15:17.223945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:58.561 [2024-11-04 16:15:17.223971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.878 ms 00:20:58.561 [2024-11-04 16:15:17.223990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.561 [2024-11-04 16:15:17.224146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.561 [2024-11-04 16:15:17.224168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:58.561 [2024-11-04 16:15:17.224182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:20:58.561 [2024-11-04 16:15:17.224200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.561 [2024-11-04 16:15:17.224246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.561 [2024-11-04 16:15:17.224265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:58.561 [2024-11-04 16:15:17.224278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:58.561 [2024-11-04 16:15:17.224296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.561 [2024-11-04 16:15:17.224324] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:58.561 [2024-11-04 16:15:17.229166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.561 [2024-11-04 16:15:17.229201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:58.561 [2024-11-04 16:15:17.229221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.849 ms 00:20:58.561 [2024-11-04 16:15:17.229232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.561 [2024-11-04 16:15:17.229314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.561 [2024-11-04 16:15:17.229328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:58.561 [2024-11-04 16:15:17.229347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:58.561 [2024-11-04 16:15:17.229364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.561 [2024-11-04 16:15:17.229394] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:58.561 [2024-11-04 16:15:17.229420] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:58.561 [2024-11-04 16:15:17.229479] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:58.561 [2024-11-04 16:15:17.229500] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:58.561 [2024-11-04 16:15:17.229590] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:58.561 [2024-11-04 16:15:17.229605] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:58.561 [2024-11-04 16:15:17.229627] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:58.561 [2024-11-04 16:15:17.229647] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:58.561 [2024-11-04 16:15:17.229666] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:58.561 [2024-11-04 16:15:17.229679] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:58.562 [2024-11-04 16:15:17.229696] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:58.562 [2024-11-04 16:15:17.229707] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:58.562 [2024-11-04 16:15:17.229729] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:58.562 [2024-11-04 16:15:17.229741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.562 [2024-11-04 16:15:17.229780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:58.562 [2024-11-04 16:15:17.229809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:20:58.562 [2024-11-04 16:15:17.229827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.562 [2024-11-04 16:15:17.229910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.562 [2024-11-04 16:15:17.229929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:58.562 [2024-11-04 16:15:17.229942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:58.562 [2024-11-04 16:15:17.229960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.562 [2024-11-04 16:15:17.230061] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:58.562 [2024-11-04 16:15:17.230085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:58.562 [2024-11-04 16:15:17.230098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.562 [2024-11-04 16:15:17.230116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.562 [2024-11-04 16:15:17.230128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:58.562 [2024-11-04 16:15:17.230145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:58.562 [2024-11-04 16:15:17.230156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:58.562 [2024-11-04 16:15:17.230180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:58.562 [2024-11-04 16:15:17.230192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:58.562 [2024-11-04 16:15:17.230208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.562 [2024-11-04 16:15:17.230220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:58.562 [2024-11-04 16:15:17.230252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:58.562 [2024-11-04 16:15:17.230264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.562 [2024-11-04 16:15:17.230281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:58.562 [2024-11-04 16:15:17.230293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:58.562 [2024-11-04 16:15:17.230310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.562 
[2024-11-04 16:15:17.230323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:58.562 [2024-11-04 16:15:17.230340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:58.562 [2024-11-04 16:15:17.230352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.562 [2024-11-04 16:15:17.230369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:58.562 [2024-11-04 16:15:17.230393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:58.562 [2024-11-04 16:15:17.230411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.562 [2024-11-04 16:15:17.230423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:58.562 [2024-11-04 16:15:17.230445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:58.562 [2024-11-04 16:15:17.230457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.562 [2024-11-04 16:15:17.230474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:58.562 [2024-11-04 16:15:17.230486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:58.562 [2024-11-04 16:15:17.230503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.562 [2024-11-04 16:15:17.230523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:58.562 [2024-11-04 16:15:17.230541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:58.562 [2024-11-04 16:15:17.230552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.562 [2024-11-04 16:15:17.230571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:58.562 [2024-11-04 16:15:17.230583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:58.562 [2024-11-04 16:15:17.230600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.562 [2024-11-04 16:15:17.230612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:58.562 [2024-11-04 16:15:17.230629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:58.562 [2024-11-04 16:15:17.230641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.562 [2024-11-04 16:15:17.230657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:58.562 [2024-11-04 16:15:17.230669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:58.562 [2024-11-04 16:15:17.230691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.562 [2024-11-04 16:15:17.230703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:58.562 [2024-11-04 16:15:17.230720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:58.562 [2024-11-04 16:15:17.230732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.562 [2024-11-04 16:15:17.230759] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:58.562 [2024-11-04 16:15:17.230788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:58.562 [2024-11-04 16:15:17.230812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.562 [2024-11-04 16:15:17.230825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.562 [2024-11-04 16:15:17.230843] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:58.562 [2024-11-04 16:15:17.230857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:58.562 [2024-11-04 16:15:17.230872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:58.562 [2024-11-04 16:15:17.230884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:58.562 [2024-11-04 16:15:17.230897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:58.562 [2024-11-04 16:15:17.230909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:58.562 [2024-11-04 16:15:17.230924] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:58.562 [2024-11-04 16:15:17.230939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.562 [2024-11-04 16:15:17.230960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:58.562 [2024-11-04 16:15:17.230973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:58.562 [2024-11-04 16:15:17.230988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:58.562 [2024-11-04 16:15:17.231001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:58.562 [2024-11-04 16:15:17.231016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:58.562 [2024-11-04 16:15:17.231028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:58.562 [2024-11-04 16:15:17.231044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:58.562 [2024-11-04 16:15:17.231056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:58.562 [2024-11-04 16:15:17.231070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:58.562 [2024-11-04 16:15:17.231083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:58.562 [2024-11-04 16:15:17.231098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:58.562 [2024-11-04 16:15:17.231110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:58.562 [2024-11-04 16:15:17.231125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:58.562 [2024-11-04 16:15:17.231137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:58.562 [2024-11-04 16:15:17.231152] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:58.562 [2024-11-04 
16:15:17.231165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.562 [2024-11-04 16:15:17.231183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:58.562 [2024-11-04 16:15:17.231196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:58.563 [2024-11-04 16:15:17.231211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:58.563 [2024-11-04 16:15:17.231223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:58.563 [2024-11-04 16:15:17.231239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.563 [2024-11-04 16:15:17.231252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:58.563 [2024-11-04 16:15:17.231267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:20:58.563 [2024-11-04 16:15:17.231279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.563 [2024-11-04 16:15:17.269232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.563 [2024-11-04 16:15:17.269269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:58.563 [2024-11-04 16:15:17.269290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.942 ms 00:20:58.563 [2024-11-04 16:15:17.269302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.563 [2024-11-04 16:15:17.269429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.563 [2024-11-04 16:15:17.269443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:58.563 [2024-11-04 16:15:17.269461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:58.563 [2024-11-04 16:15:17.269472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.822 [2024-11-04 16:15:17.318230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.822 [2024-11-04 16:15:17.318268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:58.822 [2024-11-04 16:15:17.318290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.806 ms 00:20:58.822 [2024-11-04 16:15:17.318302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.822 [2024-11-04 16:15:17.318392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.822 [2024-11-04 16:15:17.318405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:58.822 [2024-11-04 16:15:17.318420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:58.822 [2024-11-04 16:15:17.318432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.822 [2024-11-04 16:15:17.318942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.822 [2024-11-04 16:15:17.318958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:58.822 [2024-11-04 16:15:17.318978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:20:58.822 [2024-11-04 16:15:17.318990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:58.822 [2024-11-04 16:15:17.319115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.822 [2024-11-04 16:15:17.319130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:58.822 [2024-11-04 16:15:17.319146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:20:58.822 [2024-11-04 16:15:17.319157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.822 [2024-11-04 16:15:17.340782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.822 [2024-11-04 16:15:17.340820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:58.822 [2024-11-04 16:15:17.340841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.624 ms 00:20:58.822 [2024-11-04 16:15:17.340853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.822 [2024-11-04 16:15:17.359349] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:58.822 [2024-11-04 16:15:17.359408] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:58.822 [2024-11-04 16:15:17.359430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.822 [2024-11-04 16:15:17.359442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:58.822 [2024-11-04 16:15:17.359458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.489 ms 00:20:58.822 [2024-11-04 16:15:17.359470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.822 [2024-11-04 16:15:17.387207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.822 [2024-11-04 16:15:17.387370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:58.822 [2024-11-04 16:15:17.387418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.690 ms 00:20:58.822 [2024-11-04 16:15:17.387430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.822 [2024-11-04 16:15:17.404599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.822 [2024-11-04 16:15:17.404640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:58.822 [2024-11-04 16:15:17.404660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.082 ms 00:20:58.822 [2024-11-04 16:15:17.404671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.822 [2024-11-04 16:15:17.422480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.822 [2024-11-04 16:15:17.422527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:58.822 [2024-11-04 16:15:17.422546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.729 ms 00:20:58.822 [2024-11-04 16:15:17.422558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.822 [2024-11-04 16:15:17.423367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.822 [2024-11-04 16:15:17.423405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:58.822 [2024-11-04 16:15:17.423423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.676 ms 00:20:58.822 [2024-11-04 16:15:17.423436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.822 [2024-11-04 
16:15:17.522522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.822 [2024-11-04 16:15:17.522580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:58.822 [2024-11-04 16:15:17.522603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.199 ms 00:20:58.822 [2024-11-04 16:15:17.522616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.822 [2024-11-04 16:15:17.534765] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:59.081 [2024-11-04 16:15:17.551539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.082 [2024-11-04 16:15:17.551601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:59.082 [2024-11-04 16:15:17.551649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.819 ms 00:20:59.082 [2024-11-04 16:15:17.551665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.082 [2024-11-04 16:15:17.551799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.082 [2024-11-04 16:15:17.551819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:59.082 [2024-11-04 16:15:17.551833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:59.082 [2024-11-04 16:15:17.551848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.082 [2024-11-04 16:15:17.551939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.082 [2024-11-04 16:15:17.551957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:59.082 [2024-11-04 16:15:17.551970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:59.082 [2024-11-04 16:15:17.551985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.082 [2024-11-04 16:15:17.552018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.082 [2024-11-04 16:15:17.552034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:59.082 [2024-11-04 16:15:17.552046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:59.082 [2024-11-04 16:15:17.552061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.082 [2024-11-04 16:15:17.552104] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:59.082 [2024-11-04 16:15:17.552124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.082 [2024-11-04 16:15:17.552137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:59.082 [2024-11-04 16:15:17.552157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:59.082 [2024-11-04 16:15:17.552169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.082 [2024-11-04 16:15:17.588002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.082 [2024-11-04 16:15:17.588046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:59.082 [2024-11-04 16:15:17.588066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.852 ms 00:20:59.082 [2024-11-04 16:15:17.588078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.082 [2024-11-04 16:15:17.588200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.082 [2024-11-04 16:15:17.588214] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:59.082 [2024-11-04 16:15:17.588229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:59.082 [2024-11-04 16:15:17.588244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.082 [2024-11-04 16:15:17.589414] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:59.082 [2024-11-04 16:15:17.593635] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 397.986 ms, result 0 00:20:59.082 [2024-11-04 16:15:17.594877] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:59.082 Some configs were skipped because the RPC state that can call them passed over. 00:20:59.082 16:15:17 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:59.341 [2024-11-04 16:15:17.833883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.341 [2024-11-04 16:15:17.834090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:59.341 [2024-11-04 16:15:17.834117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.689 ms 00:20:59.341 [2024-11-04 16:15:17.834133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.341 [2024-11-04 16:15:17.834182] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.989 ms, result 0 00:20:59.341 true 00:20:59.341 16:15:17 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:59.341 [2024-11-04 16:15:18.029362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.341 [2024-11-04 16:15:18.029526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:59.341 [2024-11-04 16:15:18.029614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.290 ms 00:20:59.341 [2024-11-04 16:15:18.029655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.341 [2024-11-04 16:15:18.029733] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.663 ms, result 0 00:20:59.341 true 00:20:59.600 16:15:18 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 75978 00:20:59.600 16:15:18 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75978 ']' 00:20:59.600 16:15:18 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75978 00:20:59.600 16:15:18 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:20:59.600 16:15:18 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:59.600 16:15:18 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75978 00:20:59.600 16:15:18 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:59.600 16:15:18 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:59.600 16:15:18 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75978' 00:20:59.600 killing process with pid 75978 00:20:59.600 16:15:18 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 75978 00:20:59.600 16:15:18 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 75978 00:21:00.537 [2024-11-04 16:15:19.179236] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.537 [2024-11-04 16:15:19.179549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:00.537 [2024-11-04 16:15:19.179706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:00.537 [2024-11-04 16:15:19.179765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.538 [2024-11-04 16:15:19.179860] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:00.538 [2024-11-04 16:15:19.184164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.538 [2024-11-04 16:15:19.184206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:00.538 [2024-11-04 16:15:19.184226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.254 ms 00:21:00.538 [2024-11-04 16:15:19.184238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.538 [2024-11-04 16:15:19.184496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.538 [2024-11-04 16:15:19.184511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:00.538 [2024-11-04 16:15:19.184526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:21:00.538 [2024-11-04 16:15:19.184538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.538 [2024-11-04 16:15:19.187915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.538 [2024-11-04 16:15:19.187957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:00.538 [2024-11-04 16:15:19.187979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.354 ms 00:21:00.538 [2024-11-04 16:15:19.187992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.538 [2024-11-04 16:15:19.193426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.538 [2024-11-04 16:15:19.193571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:00.538 [2024-11-04 16:15:19.193616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.395 ms 00:21:00.538 [2024-11-04 16:15:19.193629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.538 [2024-11-04 16:15:19.207716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.538 [2024-11-04 16:15:19.207765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:00.538 [2024-11-04 16:15:19.207804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.036 ms 00:21:00.538 [2024-11-04 16:15:19.207826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.538 [2024-11-04 16:15:19.218150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.538 [2024-11-04 16:15:19.218189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:00.538 [2024-11-04 16:15:19.218227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.258 ms 00:21:00.538 [2024-11-04 16:15:19.218239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.538 [2024-11-04 16:15:19.218377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.538 [2024-11-04 16:15:19.218392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:00.538 [2024-11-04 16:15:19.218408] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:21:00.538 [2024-11-04 16:15:19.218420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.538 [2024-11-04 16:15:19.233023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.538 [2024-11-04 16:15:19.233164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:00.538 [2024-11-04 16:15:19.233209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.598 ms 00:21:00.538 [2024-11-04 16:15:19.233220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.538 [2024-11-04 16:15:19.247534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.538 [2024-11-04 16:15:19.247706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:00.538 [2024-11-04 16:15:19.247739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.272 ms 00:21:00.538 [2024-11-04 16:15:19.247764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.797 [2024-11-04 16:15:19.261909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.797 [2024-11-04 16:15:19.262049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:00.797 [2024-11-04 16:15:19.262097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.056 ms 00:21:00.797 [2024-11-04 16:15:19.262109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.797 [2024-11-04 16:15:19.275802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.797 [2024-11-04 16:15:19.275962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:00.797 [2024-11-04 16:15:19.276008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.623 ms 00:21:00.797 [2024-11-04 16:15:19.276020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.797 [2024-11-04 16:15:19.276109] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:00.797 [2024-11-04 16:15:19.276131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 
16:15:19.276279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:21:00.797 [2024-11-04 16:15:19.276644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:00.797 [2024-11-04 16:15:19.276784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.276797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.276813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.276825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.276841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.276854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.276870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.276883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.276898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.276911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.276929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.276950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.276965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.276978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.276994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:00.798 [2024-11-04 16:15:19.277617] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:00.798 [2024-11-04 16:15:19.277639] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84bc5547-bd10-4723-8e79-2ff33cc227b9 00:21:00.798 [2024-11-04 16:15:19.277662] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:00.798 [2024-11-04 16:15:19.277681] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:00.798 [2024-11-04 16:15:19.277694] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:00.798 [2024-11-04 16:15:19.277709] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:00.798 [2024-11-04 16:15:19.277721] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:00.798 [2024-11-04 16:15:19.277736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:00.798 [2024-11-04 16:15:19.277758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:00.798 [2024-11-04 16:15:19.277772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:00.798 [2024-11-04 16:15:19.277784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:00.798 [2024-11-04 16:15:19.277798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:00.798 [2024-11-04 16:15:19.277811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:00.798 [2024-11-04 16:15:19.277827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.694 ms 00:21:00.798 [2024-11-04 16:15:19.277839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.798 [2024-11-04 16:15:19.296761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.798 [2024-11-04 16:15:19.296794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:00.798 [2024-11-04 16:15:19.296814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.903 ms 00:21:00.798 [2024-11-04 16:15:19.296842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.798 [2024-11-04 16:15:19.297455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.798 [2024-11-04 16:15:19.297481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:00.798 [2024-11-04 16:15:19.297498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:21:00.798 [2024-11-04 16:15:19.297514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.798 [2024-11-04 16:15:19.361978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.798 [2024-11-04 16:15:19.362018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:00.798 [2024-11-04 16:15:19.362039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.798 [2024-11-04 16:15:19.362067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.798 [2024-11-04 16:15:19.362160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.798 [2024-11-04 16:15:19.362174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:00.798 [2024-11-04 16:15:19.362193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.798 [2024-11-04 16:15:19.362211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.798 [2024-11-04 16:15:19.362271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.798 [2024-11-04 16:15:19.362285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:00.798 [2024-11-04 16:15:19.362303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.798 [2024-11-04 16:15:19.362315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.798 [2024-11-04 16:15:19.362340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.798 [2024-11-04 16:15:19.362352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:00.799 [2024-11-04 16:15:19.362367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.799 [2024-11-04 16:15:19.362379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.799 [2024-11-04 16:15:19.480198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.799 [2024-11-04 16:15:19.480253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:00.799 [2024-11-04 16:15:19.480293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.799 [2024-11-04 16:15:19.480306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.059 [2024-11-04 
16:15:19.576627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.059 [2024-11-04 16:15:19.576678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:01.059 [2024-11-04 16:15:19.576698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.059 [2024-11-04 16:15:19.576715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.059 [2024-11-04 16:15:19.576819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.059 [2024-11-04 16:15:19.576834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:01.059 [2024-11-04 16:15:19.576853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.059 [2024-11-04 16:15:19.576866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.059 [2024-11-04 16:15:19.576901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.059 [2024-11-04 16:15:19.576914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:01.059 [2024-11-04 16:15:19.576929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.059 [2024-11-04 16:15:19.576941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.059 [2024-11-04 16:15:19.577065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.059 [2024-11-04 16:15:19.577080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:01.059 [2024-11-04 16:15:19.577096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.059 [2024-11-04 16:15:19.577108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.059 [2024-11-04 16:15:19.577158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.059 [2024-11-04 16:15:19.577172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:01.059 [2024-11-04 16:15:19.577188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.059 [2024-11-04 16:15:19.577200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.059 [2024-11-04 16:15:19.577247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.059 [2024-11-04 16:15:19.577263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:01.059 [2024-11-04 16:15:19.577280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.059 [2024-11-04 16:15:19.577292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.059 [2024-11-04 16:15:19.577344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.059 [2024-11-04 16:15:19.577364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:01.059 [2024-11-04 16:15:19.577380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.059 [2024-11-04 16:15:19.577392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.059 [2024-11-04 16:15:19.577547] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 398.930 ms, result 0 00:21:02.004 16:15:20 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:02.004 [2024-11-04 16:15:20.637826] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:21:02.004 [2024-11-04 16:15:20.638290] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76036 ] 00:21:02.275 [2024-11-04 16:15:20.817051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.275 [2024-11-04 16:15:20.933696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.844 [2024-11-04 16:15:21.270847] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:02.844 [2024-11-04 16:15:21.270916] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:02.844 [2024-11-04 16:15:21.432968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.844 [2024-11-04 16:15:21.433019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:02.844 [2024-11-04 16:15:21.433035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:02.844 [2024-11-04 16:15:21.433047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.844 [2024-11-04 16:15:21.436058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.844 [2024-11-04 16:15:21.436099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:02.844 [2024-11-04 16:15:21.436112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.992 ms 00:21:02.844 [2024-11-04 16:15:21.436123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.844 [2024-11-04 16:15:21.436242] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:02.844 [2024-11-04 16:15:21.437306] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:02.844 [2024-11-04 16:15:21.437344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.844 [2024-11-04 16:15:21.437358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:02.844 [2024-11-04 16:15:21.437370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:21:02.844 [2024-11-04 16:15:21.437382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.844 [2024-11-04 16:15:21.438970] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:02.844 [2024-11-04 16:15:21.456928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.844 [2024-11-04 16:15:21.457132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:02.844 [2024-11-04 16:15:21.457157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.988 ms 00:21:02.844 [2024-11-04 16:15:21.457170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.844 [2024-11-04 16:15:21.457276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.844 [2024-11-04 16:15:21.457292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:02.844 [2024-11-04 16:15:21.457306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:02.844 [2024-11-04 
16:15:21.457317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.844 [2024-11-04 16:15:21.464282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.844 [2024-11-04 16:15:21.464447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:02.844 [2024-11-04 16:15:21.464487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.928 ms 00:21:02.844 [2024-11-04 16:15:21.464499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.844 [2024-11-04 16:15:21.464613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.844 [2024-11-04 16:15:21.464629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:02.844 [2024-11-04 16:15:21.464642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:02.844 [2024-11-04 16:15:21.464654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.844 [2024-11-04 16:15:21.464685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.844 [2024-11-04 16:15:21.464702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:02.844 [2024-11-04 16:15:21.464714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:02.844 [2024-11-04 16:15:21.464727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.844 [2024-11-04 16:15:21.464754] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:02.844 [2024-11-04 16:15:21.469435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.844 [2024-11-04 16:15:21.469469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:02.844 [2024-11-04 16:15:21.469482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.695 ms 00:21:02.844 [2024-11-04 16:15:21.469510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.844 [2024-11-04 16:15:21.469586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.844 [2024-11-04 16:15:21.469600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:02.844 [2024-11-04 16:15:21.469612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:02.844 [2024-11-04 16:15:21.469624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.844 [2024-11-04 16:15:21.469650] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:02.844 [2024-11-04 16:15:21.469678] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:02.844 [2024-11-04 16:15:21.469715] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:02.844 [2024-11-04 16:15:21.469733] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:02.844 [2024-11-04 16:15:21.469834] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:02.844 [2024-11-04 16:15:21.469850] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:02.844 [2024-11-04 16:15:21.469864] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:21:02.844 [2024-11-04 16:15:21.469879] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:02.844 [2024-11-04 16:15:21.469897] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:02.844 [2024-11-04 16:15:21.469910] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:02.844 [2024-11-04 16:15:21.469921] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:02.844 [2024-11-04 16:15:21.469933] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:02.844 [2024-11-04 16:15:21.469944] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:02.844 [2024-11-04 16:15:21.469957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.844 [2024-11-04 16:15:21.469968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:02.844 [2024-11-04 16:15:21.469980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:21:02.844 [2024-11-04 16:15:21.470002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.844 [2024-11-04 16:15:21.470077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.844 [2024-11-04 16:15:21.470089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:02.844 [2024-11-04 16:15:21.470104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:02.844 [2024-11-04 16:15:21.470115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.844 [2024-11-04 16:15:21.470199] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:02.844 [2024-11-04 16:15:21.470212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:02.844 [2024-11-04 16:15:21.470224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:02.844 [2024-11-04 16:15:21.470235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.844 [2024-11-04 16:15:21.470246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:02.844 [2024-11-04 16:15:21.470257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:02.844 [2024-11-04 16:15:21.470267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:02.844 [2024-11-04 16:15:21.470278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:02.844 [2024-11-04 16:15:21.470289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:02.844 [2024-11-04 16:15:21.470299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:02.844 [2024-11-04 16:15:21.470309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:02.844 [2024-11-04 16:15:21.470319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:02.844 [2024-11-04 16:15:21.470333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:02.844 [2024-11-04 16:15:21.470356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:02.844 [2024-11-04 16:15:21.470367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:02.844 [2024-11-04 16:15:21.470378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.844 [2024-11-04 16:15:21.470388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:21:02.844 [2024-11-04 16:15:21.470398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:02.844 [2024-11-04 16:15:21.470409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.844 [2024-11-04 16:15:21.470420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:02.844 [2024-11-04 16:15:21.470430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:02.844 [2024-11-04 16:15:21.470440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.844 [2024-11-04 16:15:21.470450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:02.844 [2024-11-04 16:15:21.470460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:02.845 [2024-11-04 16:15:21.470470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.845 [2024-11-04 16:15:21.470481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:02.845 [2024-11-04 16:15:21.470491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:02.845 [2024-11-04 16:15:21.470500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.845 [2024-11-04 16:15:21.470510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:02.845 [2024-11-04 16:15:21.470530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:02.845 [2024-11-04 16:15:21.470540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:02.845 [2024-11-04 16:15:21.470568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:02.845 [2024-11-04 16:15:21.470579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:02.845 [2024-11-04 16:15:21.470590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:02.845 [2024-11-04 16:15:21.470600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:02.845 [2024-11-04 16:15:21.470611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:02.845 [2024-11-04 16:15:21.470622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:02.845 [2024-11-04 16:15:21.470632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:02.845 [2024-11-04 16:15:21.470643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:02.845 [2024-11-04 16:15:21.470653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.845 [2024-11-04 16:15:21.470664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:02.845 [2024-11-04 16:15:21.470675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:02.845 [2024-11-04 16:15:21.470686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.845 [2024-11-04 16:15:21.470697] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:02.845 [2024-11-04 16:15:21.470711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:02.845 [2024-11-04 16:15:21.470723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:02.845 [2024-11-04 16:15:21.470739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:02.845 [2024-11-04 16:15:21.470750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:02.845 [2024-11-04 16:15:21.470773] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:02.845 [2024-11-04 16:15:21.470785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:02.845 [2024-11-04 16:15:21.470796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:02.845 [2024-11-04 16:15:21.470807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:02.845 [2024-11-04 16:15:21.470817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:02.845 [2024-11-04 16:15:21.470830] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:02.845 [2024-11-04 16:15:21.470844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:02.845 [2024-11-04 16:15:21.470858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:02.845 [2024-11-04 16:15:21.470870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:02.845 [2024-11-04 16:15:21.470882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:02.845 [2024-11-04 16:15:21.470894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:02.845 [2024-11-04 16:15:21.470906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:02.845 [2024-11-04 16:15:21.470918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:02.845 [2024-11-04 16:15:21.470930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:02.845 [2024-11-04 16:15:21.470942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:02.845 [2024-11-04 16:15:21.470953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:02.845 [2024-11-04 16:15:21.470965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:02.845 [2024-11-04 16:15:21.470977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:02.845 [2024-11-04 16:15:21.470989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:02.845 [2024-11-04 16:15:21.471001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:02.845 [2024-11-04 16:15:21.471013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:02.845 [2024-11-04 16:15:21.471024] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:02.845 [2024-11-04 16:15:21.471037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:02.845 [2024-11-04 16:15:21.471051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:02.845 [2024-11-04 16:15:21.471063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:02.845 [2024-11-04 16:15:21.471074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:02.845 [2024-11-04 16:15:21.471085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:02.845 [2024-11-04 16:15:21.471098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.845 [2024-11-04 16:15:21.471110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:02.845 [2024-11-04 16:15:21.471126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:21:02.845 [2024-11-04 16:15:21.471137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.845 [2024-11-04 16:15:21.509225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.845 [2024-11-04 16:15:21.509418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:02.845 [2024-11-04 16:15:21.509506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.090 ms 00:21:02.845 [2024-11-04 16:15:21.509549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.845 [2024-11-04 16:15:21.509699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.845 [2024-11-04 16:15:21.509835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:02.845 [2024-11-04 16:15:21.509879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:02.845 [2024-11-04 16:15:21.509916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.105 [2024-11-04 16:15:21.585504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.105 [2024-11-04 16:15:21.585685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:03.105 [2024-11-04 16:15:21.585810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.609 ms 00:21:03.105 [2024-11-04 16:15:21.585863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.105 [2024-11-04 16:15:21.586004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.105 [2024-11-04 16:15:21.586148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:03.105 [2024-11-04 16:15:21.586240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:03.105 [2024-11-04 16:15:21.586275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.105 [2024-11-04 16:15:21.586821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.105 [2024-11-04 16:15:21.586949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:03.105 [2024-11-04 16:15:21.587030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.495 ms 00:21:03.105 [2024-11-04 16:15:21.587077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.105 [2024-11-04 16:15:21.587233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:03.105 [2024-11-04 16:15:21.587275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:03.105 [2024-11-04 16:15:21.587377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:21:03.105 [2024-11-04 16:15:21.587419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.105 [2024-11-04 16:15:21.606461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.105 [2024-11-04 16:15:21.606615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:03.105 [2024-11-04 16:15:21.606732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.016 ms 00:21:03.105 [2024-11-04 16:15:21.606795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.105 [2024-11-04 16:15:21.625936] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:03.105 [2024-11-04 16:15:21.626103] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:03.105 [2024-11-04 16:15:21.626233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.105 [2024-11-04 16:15:21.626272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:03.105 [2024-11-04 16:15:21.626308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.310 ms 00:21:03.105 [2024-11-04 16:15:21.626343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.105 [2024-11-04 16:15:21.656482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.105 [2024-11-04 16:15:21.656662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:03.105 [2024-11-04 16:15:21.656743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.072 ms 00:21:03.105 [2024-11-04 16:15:21.656806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.105 [2024-11-04 16:15:21.674744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.105 [2024-11-04 16:15:21.674924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:03.105 [2024-11-04 16:15:21.675008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.852 ms 00:21:03.105 [2024-11-04 16:15:21.675027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.105 [2024-11-04 16:15:21.692176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.105 [2024-11-04 16:15:21.692214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:03.105 [2024-11-04 16:15:21.692229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.021 ms 00:21:03.105 [2024-11-04 16:15:21.692239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.105 [2024-11-04 16:15:21.693013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.105 [2024-11-04 16:15:21.693058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:03.105 [2024-11-04 16:15:21.693072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:21:03.105 [2024-11-04 16:15:21.693085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.105 [2024-11-04 16:15:21.773648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.105 [2024-11-04 
16:15:21.773717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:03.105 [2024-11-04 16:15:21.773735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.659 ms 00:21:03.105 [2024-11-04 16:15:21.773758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.105 [2024-11-04 16:15:21.783863] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:03.105 [2024-11-04 16:15:21.799566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.105 [2024-11-04 16:15:21.799804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:03.105 [2024-11-04 16:15:21.799848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.726 ms 00:21:03.105 [2024-11-04 16:15:21.799861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.105 [2024-11-04 16:15:21.799996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.105 [2024-11-04 16:15:21.800011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:03.105 [2024-11-04 16:15:21.800026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:03.105 [2024-11-04 16:15:21.800037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.105 [2024-11-04 16:15:21.800094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.105 [2024-11-04 16:15:21.800107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:03.105 [2024-11-04 16:15:21.800120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:03.105 [2024-11-04 16:15:21.800132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.105 [2024-11-04 16:15:21.800164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.105 [2024-11-04 16:15:21.800181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:03.105 [2024-11-04 16:15:21.800194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:03.105 [2024-11-04 16:15:21.800206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.105 [2024-11-04 16:15:21.800246] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:03.105 [2024-11-04 16:15:21.800260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.105 [2024-11-04 16:15:21.800272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:03.105 [2024-11-04 16:15:21.800284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:03.105 [2024-11-04 16:15:21.800295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.364 [2024-11-04 16:15:21.834881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.364 [2024-11-04 16:15:21.834944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:03.364 [2024-11-04 16:15:21.834960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.616 ms 00:21:03.364 [2024-11-04 16:15:21.834971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.364 [2024-11-04 16:15:21.835095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.364 [2024-11-04 16:15:21.835109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:03.364 [2024-11-04 
16:15:21.835121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:03.364 [2024-11-04 16:15:21.835132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.364 [2024-11-04 16:15:21.836135] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:03.364 [2024-11-04 16:15:21.840546] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 403.520 ms, result 0 00:21:03.364 [2024-11-04 16:15:21.841453] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:03.364 [2024-11-04 16:15:21.859230] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:04.301  [2024-11-04T16:15:23.960Z] Copying: 27/256 [MB] (27 MBps) [2024-11-04T16:15:25.337Z] Copying: 50/256 [MB] (23 MBps) [2024-11-04T16:15:26.274Z] Copying: 74/256 [MB] (23 MBps) [2024-11-04T16:15:27.211Z] Copying: 98/256 [MB] (23 MBps) [2024-11-04T16:15:28.148Z] Copying: 122/256 [MB] (23 MBps) [2024-11-04T16:15:29.085Z] Copying: 145/256 [MB] (23 MBps) [2024-11-04T16:15:30.022Z] Copying: 168/256 [MB] (23 MBps) [2024-11-04T16:15:30.959Z] Copying: 192/256 [MB] (23 MBps) [2024-11-04T16:15:32.375Z] Copying: 216/256 [MB] (24 MBps) [2024-11-04T16:15:32.633Z] Copying: 240/256 [MB] (24 MBps) [2024-11-04T16:15:32.893Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-04 16:15:32.760283] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:14.171 [2024-11-04 16:15:32.782565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.171 [2024-11-04 16:15:32.782628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:14.171 [2024-11-04 16:15:32.782649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:14.171 [2024-11-04 16:15:32.782676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.171 [2024-11-04 16:15:32.782713] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:14.171 [2024-11-04 16:15:32.787256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.171 [2024-11-04 16:15:32.787295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:14.171 [2024-11-04 16:15:32.787311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.528 ms 00:21:14.171 [2024-11-04 16:15:32.787324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.171 [2024-11-04 16:15:32.787609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.171 [2024-11-04 16:15:32.787626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:14.171 [2024-11-04 16:15:32.787640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:21:14.171 [2024-11-04 16:15:32.787652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.171 [2024-11-04 16:15:32.790524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.171 [2024-11-04 16:15:32.790558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:14.171 [2024-11-04 16:15:32.790571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.847 ms 00:21:14.171 [2024-11-04 16:15:32.790582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:14.171 [2024-11-04 16:15:32.795963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.171 [2024-11-04 16:15:32.796011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:14.171 [2024-11-04 16:15:32.796025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.346 ms 00:21:14.171 [2024-11-04 16:15:32.796038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.171 [2024-11-04 16:15:32.831044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.171 [2024-11-04 16:15:32.831096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:14.171 [2024-11-04 16:15:32.831112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.956 ms 00:21:14.171 [2024-11-04 16:15:32.831140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.171 [2024-11-04 16:15:32.851780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.171 [2024-11-04 16:15:32.851836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:14.171 [2024-11-04 16:15:32.851869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.605 ms 00:21:14.171 [2024-11-04 16:15:32.851887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.171 [2024-11-04 16:15:32.852031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.171 [2024-11-04 16:15:32.852046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:14.171 [2024-11-04 16:15:32.852058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:21:14.171 [2024-11-04 16:15:32.852069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.171 [2024-11-04 16:15:32.886309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.171 [2024-11-04 16:15:32.886355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:14.171 [2024-11-04 16:15:32.886370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.258 ms 00:21:14.171 [2024-11-04 16:15:32.886381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.431 [2024-11-04 16:15:32.920572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.431 [2024-11-04 16:15:32.920614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:14.431 [2024-11-04 16:15:32.920629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.169 ms 00:21:14.431 [2024-11-04 16:15:32.920639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.431 [2024-11-04 16:15:32.953651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.431 [2024-11-04 16:15:32.953697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:14.431 [2024-11-04 16:15:32.953711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.979 ms 00:21:14.431 [2024-11-04 16:15:32.953722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.431 [2024-11-04 16:15:32.987556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.431 [2024-11-04 16:15:32.987601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:14.431 [2024-11-04 16:15:32.987615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.774 ms 00:21:14.431 
[2024-11-04 16:15:32.987626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.431 [2024-11-04 16:15:32.987689] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:14.431 [2024-11-04 16:15:32.987708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:14.431 [2024-11-04 16:15:32.987722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:14.431 [2024-11-04 16:15:32.987735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:14.431 [2024-11-04 16:15:32.987764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:14.431 [2024-11-04 16:15:32.987779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:14.431 [2024-11-04 16:15:32.987791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:14.431 [2024-11-04 16:15:32.987803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:14.431 [2024-11-04 16:15:32.987815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:14.431 [2024-11-04 16:15:32.987827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:14.431 [2024-11-04 16:15:32.987839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.987851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.987863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.987875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.987887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.987899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.987910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.987922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.987934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.987945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.987974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.987987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.987999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988023] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 
16:15:32.988324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:21:14.432 [2024-11-04 16:15:32.988626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:14.432 [2024-11-04 16:15:32.988991] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:14.432 [2024-11-04 16:15:32.989003] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84bc5547-bd10-4723-8e79-2ff33cc227b9 00:21:14.432 [2024-11-04 16:15:32.989015] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:14.432 [2024-11-04 16:15:32.989028] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:14.432 [2024-11-04 16:15:32.989039] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:14.432 [2024-11-04 16:15:32.989052] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:14.432 [2024-11-04 16:15:32.989063] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:14.432 [2024-11-04 16:15:32.989074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:14.432 [2024-11-04 16:15:32.989086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:14.432 [2024-11-04 16:15:32.989096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:14.432 [2024-11-04 16:15:32.989107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:14.432 [2024-11-04 16:15:32.989118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.432 [2024-11-04 16:15:32.989135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:14.432 [2024-11-04 16:15:32.989148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.433 ms 00:21:14.432 [2024-11-04 16:15:32.989160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.432 [2024-11-04 16:15:33.008893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.432 [2024-11-04 16:15:33.008934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:14.432 [2024-11-04 16:15:33.008949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.740 ms 00:21:14.432 [2024-11-04 16:15:33.008961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.432 [2024-11-04 16:15:33.009563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.432 [2024-11-04 16:15:33.009589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:14.432 [2024-11-04 16:15:33.009601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:21:14.432 [2024-11-04 16:15:33.009613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.432 [2024-11-04 16:15:33.063786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.432 [2024-11-04 16:15:33.063826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:14.432 [2024-11-04 16:15:33.063841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.432 [2024-11-04 16:15:33.063852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.432 [2024-11-04 16:15:33.063953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.432 [2024-11-04 16:15:33.063967] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:14.432 [2024-11-04 16:15:33.063979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.432 [2024-11-04 16:15:33.063990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.432 [2024-11-04 16:15:33.064048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.432 [2024-11-04 16:15:33.064062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:14.432 [2024-11-04 16:15:33.064074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.432 [2024-11-04 16:15:33.064086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.432 [2024-11-04 16:15:33.064107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.432 [2024-11-04 16:15:33.064124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:14.433 [2024-11-04 16:15:33.064136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.433 [2024-11-04 16:15:33.064147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.692 [2024-11-04 16:15:33.178720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.692 [2024-11-04 16:15:33.178785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:14.692 [2024-11-04 16:15:33.178802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.692 [2024-11-04 16:15:33.178830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.692 [2024-11-04 16:15:33.273863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.692 [2024-11-04 16:15:33.273922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:14.692 [2024-11-04 16:15:33.273937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.692 [2024-11-04 16:15:33.273949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.692 [2024-11-04 16:15:33.274031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.692 [2024-11-04 16:15:33.274044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:14.692 [2024-11-04 16:15:33.274056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.692 [2024-11-04 16:15:33.274068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.692 [2024-11-04 16:15:33.274099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.692 [2024-11-04 16:15:33.274112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:14.692 [2024-11-04 16:15:33.274129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.692 [2024-11-04 16:15:33.274140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.692 [2024-11-04 16:15:33.274251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.692 [2024-11-04 16:15:33.274266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:14.692 [2024-11-04 16:15:33.274278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.692 [2024-11-04 16:15:33.274290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.692 [2024-11-04 16:15:33.274349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:21:14.692 [2024-11-04 16:15:33.274363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:14.692 [2024-11-04 16:15:33.274375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.692 [2024-11-04 16:15:33.274392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.692 [2024-11-04 16:15:33.274436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.692 [2024-11-04 16:15:33.274449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:14.692 [2024-11-04 16:15:33.274460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.692 [2024-11-04 16:15:33.274472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.692 [2024-11-04 16:15:33.274527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.692 [2024-11-04 16:15:33.274541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:14.692 [2024-11-04 16:15:33.274558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.692 [2024-11-04 16:15:33.274570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.692 [2024-11-04 16:15:33.274717] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 492.977 ms, result 0 00:21:15.629 00:21:15.629 00:21:15.629 16:15:34 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:16.198 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:21:16.198 16:15:34 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:16.198 16:15:34 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:21:16.198 16:15:34 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:16.198 16:15:34 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:16.198 16:15:34 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:21:16.198 16:15:34 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:16.198 16:15:34 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 75978 00:21:16.198 16:15:34 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75978 ']' 00:21:16.198 16:15:34 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75978 00:21:16.198 Process with pid 75978 is not found 00:21:16.198 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (75978) - No such process 00:21:16.198 16:15:34 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 75978 is not found' 00:21:16.198 00:21:16.198 real 1m9.244s 00:21:16.198 user 1m30.957s 00:21:16.198 sys 0m6.870s 00:21:16.198 16:15:34 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:16.198 ************************************ 00:21:16.198 END TEST ftl_trim 00:21:16.198 ************************************ 00:21:16.198 16:15:34 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:16.198 16:15:34 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:16.198 16:15:34 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:21:16.198 16:15:34 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:16.198 16:15:34 ftl -- common/autotest_common.sh@10 
-- # set +x 00:21:16.457 ************************************ 00:21:16.457 START TEST ftl_restore 00:21:16.457 ************************************ 00:21:16.457 16:15:34 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:16.457 * Looking for test storage... 00:21:16.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:16.457 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:16.457 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:21:16.457 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:16.457 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.457 16:15:35 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:21:16.457 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.457 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:16.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.457 --rc genhtml_branch_coverage=1 00:21:16.457 --rc genhtml_function_coverage=1 00:21:16.457 --rc genhtml_legend=1 00:21:16.457 --rc geninfo_all_blocks=1 00:21:16.457 --rc geninfo_unexecuted_blocks=1 00:21:16.457 00:21:16.457 ' 00:21:16.457 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:16.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.457 --rc genhtml_branch_coverage=1 00:21:16.457 --rc genhtml_function_coverage=1 00:21:16.457 --rc genhtml_legend=1 00:21:16.457 --rc geninfo_all_blocks=1 00:21:16.457 --rc geninfo_unexecuted_blocks=1 00:21:16.457 00:21:16.457 ' 00:21:16.457 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:16.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.457 --rc genhtml_branch_coverage=1 00:21:16.457 --rc genhtml_function_coverage=1 00:21:16.457 --rc genhtml_legend=1 00:21:16.457 --rc geninfo_all_blocks=1 00:21:16.457 --rc geninfo_unexecuted_blocks=1 00:21:16.457 00:21:16.457 ' 00:21:16.457 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:16.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.457 --rc genhtml_branch_coverage=1 00:21:16.457 --rc genhtml_function_coverage=1 00:21:16.457 --rc genhtml_legend=1 00:21:16.457 --rc geninfo_all_blocks=1 00:21:16.457 --rc geninfo_unexecuted_blocks=1 00:21:16.457 00:21:16.457 ' 00:21:16.457 16:15:35 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:16.457 16:15:35 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:16.457 16:15:35 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:16.457 16:15:35 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
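The xtrace above is scripts/common.sh comparing the installed lcov version against 1.15 and 2: each version string is split on ".-:" and compared component by component. A condensed re-implementation of that "less than" check (a sketch of the idea, not the verbatim helper, which also supports other operators via cmp_versions):

    # Sketch: component-wise version compare, as traced above.
    lt() {   # lt 1.15 2  ->  true if version $1 is older than $2
        local IFS='.-:' v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2"

The result only selects which set of LCOV_OPTS gets exported for coverage runs; it has no effect on the FTL test itself.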
00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.F1oqkINeoQ 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:16.716 
16:15:35 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76259 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76259 00:21:16.716 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 76259 ']' 00:21:16.716 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.716 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:16.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.716 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.716 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:16.716 16:15:35 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:16.716 16:15:35 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:16.716 [2024-11-04 16:15:35.307787] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:21:16.716 [2024-11-04 16:15:35.307926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76259 ] 00:21:16.976 [2024-11-04 16:15:35.491132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.976 [2024-11-04 16:15:35.592338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.912 16:15:36 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:17.912 16:15:36 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:21:17.912 16:15:36 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:17.912 16:15:36 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:17.912 16:15:36 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:17.912 16:15:36 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:17.912 16:15:36 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:17.912 16:15:36 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:18.172 16:15:36 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:18.172 16:15:36 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:18.172 16:15:36 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:18.172 16:15:36 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:21:18.172 16:15:36 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:21:18.172 16:15:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:21:18.172 16:15:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:21:18.172 16:15:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:18.431 16:15:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:21:18.431 { 00:21:18.431 "name": "nvme0n1", 00:21:18.431 "aliases": [ 00:21:18.431 "e8440dca-15f4-4a13-a3cc-8252efc76b3c" 00:21:18.431 ], 00:21:18.431 "product_name": "NVMe disk", 00:21:18.431 "block_size": 4096, 00:21:18.431 "num_blocks": 1310720, 00:21:18.431 "uuid": 
"e8440dca-15f4-4a13-a3cc-8252efc76b3c", 00:21:18.431 "numa_id": -1, 00:21:18.431 "assigned_rate_limits": { 00:21:18.431 "rw_ios_per_sec": 0, 00:21:18.431 "rw_mbytes_per_sec": 0, 00:21:18.431 "r_mbytes_per_sec": 0, 00:21:18.431 "w_mbytes_per_sec": 0 00:21:18.431 }, 00:21:18.431 "claimed": true, 00:21:18.431 "claim_type": "read_many_write_one", 00:21:18.431 "zoned": false, 00:21:18.431 "supported_io_types": { 00:21:18.431 "read": true, 00:21:18.431 "write": true, 00:21:18.431 "unmap": true, 00:21:18.431 "flush": true, 00:21:18.431 "reset": true, 00:21:18.431 "nvme_admin": true, 00:21:18.431 "nvme_io": true, 00:21:18.431 "nvme_io_md": false, 00:21:18.431 "write_zeroes": true, 00:21:18.431 "zcopy": false, 00:21:18.431 "get_zone_info": false, 00:21:18.431 "zone_management": false, 00:21:18.431 "zone_append": false, 00:21:18.431 "compare": true, 00:21:18.431 "compare_and_write": false, 00:21:18.431 "abort": true, 00:21:18.431 "seek_hole": false, 00:21:18.431 "seek_data": false, 00:21:18.431 "copy": true, 00:21:18.431 "nvme_iov_md": false 00:21:18.431 }, 00:21:18.431 "driver_specific": { 00:21:18.431 "nvme": [ 00:21:18.431 { 00:21:18.431 "pci_address": "0000:00:11.0", 00:21:18.431 "trid": { 00:21:18.431 "trtype": "PCIe", 00:21:18.431 "traddr": "0000:00:11.0" 00:21:18.431 }, 00:21:18.431 "ctrlr_data": { 00:21:18.431 "cntlid": 0, 00:21:18.431 "vendor_id": "0x1b36", 00:21:18.431 "model_number": "QEMU NVMe Ctrl", 00:21:18.431 "serial_number": "12341", 00:21:18.431 "firmware_revision": "8.0.0", 00:21:18.431 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:18.431 "oacs": { 00:21:18.432 "security": 0, 00:21:18.432 "format": 1, 00:21:18.432 "firmware": 0, 00:21:18.432 "ns_manage": 1 00:21:18.432 }, 00:21:18.432 "multi_ctrlr": false, 00:21:18.432 "ana_reporting": false 00:21:18.432 }, 00:21:18.432 "vs": { 00:21:18.432 "nvme_version": "1.4" 00:21:18.432 }, 00:21:18.432 "ns_data": { 00:21:18.432 "id": 1, 00:21:18.432 "can_share": false 00:21:18.432 } 00:21:18.432 } 00:21:18.432 ], 00:21:18.432 "mp_policy": "active_passive" 00:21:18.432 } 00:21:18.432 } 00:21:18.432 ]' 00:21:18.432 16:15:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:21:18.432 16:15:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:21:18.432 16:15:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:21:18.432 16:15:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:21:18.432 16:15:36 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:21:18.432 16:15:36 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:21:18.432 16:15:36 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:18.432 16:15:36 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:18.432 16:15:36 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:18.432 16:15:36 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:18.432 16:15:36 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:18.691 16:15:37 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=e1ab02a5-1cea-48ac-9fe0-60e0efe484b9 00:21:18.691 16:15:37 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:18.691 16:15:37 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e1ab02a5-1cea-48ac-9fe0-60e0efe484b9 00:21:18.950 16:15:37 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:21:18.950 16:15:37 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=886d6167-793d-42f4-b2c0-bc712cd604bc 00:21:18.950 16:15:37 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 886d6167-793d-42f4-b2c0-bc712cd604bc 00:21:19.209 16:15:37 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e 00:21:19.209 16:15:37 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:19.209 16:15:37 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e 00:21:19.209 16:15:37 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:19.209 16:15:37 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:19.209 16:15:37 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e 00:21:19.209 16:15:37 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:19.209 16:15:37 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e 00:21:19.209 16:15:37 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e 00:21:19.209 16:15:37 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:21:19.209 16:15:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:21:19.209 16:15:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:21:19.209 16:15:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e 00:21:19.468 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:21:19.468 { 00:21:19.468 "name": "fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e", 00:21:19.468 "aliases": [ 00:21:19.468 "lvs/nvme0n1p0" 00:21:19.468 ], 00:21:19.468 "product_name": "Logical Volume", 00:21:19.468 "block_size": 4096, 00:21:19.468 "num_blocks": 26476544, 00:21:19.468 "uuid": "fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e", 00:21:19.468 "assigned_rate_limits": { 00:21:19.468 "rw_ios_per_sec": 0, 00:21:19.468 "rw_mbytes_per_sec": 0, 00:21:19.468 "r_mbytes_per_sec": 0, 00:21:19.468 "w_mbytes_per_sec": 0 00:21:19.468 }, 00:21:19.468 "claimed": false, 00:21:19.468 "zoned": false, 00:21:19.468 "supported_io_types": { 00:21:19.468 "read": true, 00:21:19.468 "write": true, 00:21:19.468 "unmap": true, 00:21:19.468 "flush": false, 00:21:19.468 "reset": true, 00:21:19.468 "nvme_admin": false, 00:21:19.468 "nvme_io": false, 00:21:19.468 "nvme_io_md": false, 00:21:19.468 "write_zeroes": true, 00:21:19.468 "zcopy": false, 00:21:19.468 "get_zone_info": false, 00:21:19.468 "zone_management": false, 00:21:19.468 "zone_append": false, 00:21:19.468 "compare": false, 00:21:19.468 "compare_and_write": false, 00:21:19.468 "abort": false, 00:21:19.468 "seek_hole": true, 00:21:19.468 "seek_data": true, 00:21:19.468 "copy": false, 00:21:19.468 "nvme_iov_md": false 00:21:19.468 }, 00:21:19.468 "driver_specific": { 00:21:19.468 "lvol": { 00:21:19.468 "lvol_store_uuid": "886d6167-793d-42f4-b2c0-bc712cd604bc", 00:21:19.468 "base_bdev": "nvme0n1", 00:21:19.468 "thin_provision": true, 00:21:19.468 "num_allocated_clusters": 0, 00:21:19.468 "snapshot": false, 00:21:19.468 "clone": false, 00:21:19.468 "esnap_clone": false 00:21:19.468 } 00:21:19.468 } 00:21:19.468 } 00:21:19.468 ]' 00:21:19.468 16:15:38 ftl.ftl_restore -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:21:19.468 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:21:19.468 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:21:19.468 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:21:19.468 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:21:19.468 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:21:19.468 16:15:38 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:19.468 16:15:38 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:19.468 16:15:38 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:19.727 16:15:38 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:19.727 16:15:38 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:19.727 16:15:38 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e 00:21:19.727 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e 00:21:19.727 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:21:19.727 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:21:19.727 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:21:19.727 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e 00:21:19.986 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:21:19.986 { 00:21:19.986 "name": "fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e", 00:21:19.987 "aliases": [ 00:21:19.987 "lvs/nvme0n1p0" 00:21:19.987 ], 00:21:19.987 "product_name": "Logical Volume", 00:21:19.987 "block_size": 4096, 00:21:19.987 "num_blocks": 26476544, 00:21:19.987 "uuid": "fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e", 00:21:19.987 "assigned_rate_limits": { 00:21:19.987 "rw_ios_per_sec": 0, 00:21:19.987 "rw_mbytes_per_sec": 0, 00:21:19.987 "r_mbytes_per_sec": 0, 00:21:19.987 "w_mbytes_per_sec": 0 00:21:19.987 }, 00:21:19.987 "claimed": false, 00:21:19.987 "zoned": false, 00:21:19.987 "supported_io_types": { 00:21:19.987 "read": true, 00:21:19.987 "write": true, 00:21:19.987 "unmap": true, 00:21:19.987 "flush": false, 00:21:19.987 "reset": true, 00:21:19.987 "nvme_admin": false, 00:21:19.987 "nvme_io": false, 00:21:19.987 "nvme_io_md": false, 00:21:19.987 "write_zeroes": true, 00:21:19.987 "zcopy": false, 00:21:19.987 "get_zone_info": false, 00:21:19.987 "zone_management": false, 00:21:19.987 "zone_append": false, 00:21:19.987 "compare": false, 00:21:19.987 "compare_and_write": false, 00:21:19.987 "abort": false, 00:21:19.987 "seek_hole": true, 00:21:19.987 "seek_data": true, 00:21:19.987 "copy": false, 00:21:19.987 "nvme_iov_md": false 00:21:19.987 }, 00:21:19.987 "driver_specific": { 00:21:19.987 "lvol": { 00:21:19.987 "lvol_store_uuid": "886d6167-793d-42f4-b2c0-bc712cd604bc", 00:21:19.987 "base_bdev": "nvme0n1", 00:21:19.987 "thin_provision": true, 00:21:19.987 "num_allocated_clusters": 0, 00:21:19.987 "snapshot": false, 00:21:19.987 "clone": false, 00:21:19.987 "esnap_clone": false 00:21:19.987 } 00:21:19.987 } 00:21:19.987 } 00:21:19.987 ]' 00:21:19.987 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 
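The bs= / nb= / bdev_size= trace here (and the identical pass for nvme0n1 earlier: 1310720 blocks of 4096 B, i.e. 5120 MiB) is the get_bdev_size helper from test/common/autotest_common.sh reading block_size and num_blocks out of bdev_get_bdevs and converting to MiB. A condensed sketch of that calculation (rpc.py path shortened relative to the absolute one used above):

    # Sketch of the traced get_bdev_size logic: bdev size in MiB.
    get_bdev_size() {   # get_bdev_size <bdev name>
        local bdev_info bs nb
        bdev_info=$(scripts/rpc.py bdev_get_bdevs -b "$1")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $(( nb * bs / 1024 / 1024 ))   # e.g. 26476544 * 4096 B -> 103424 MiB
    }

For the thin lvol this yields 103424 MiB, which restore.sh then reuses as the FTL base size and, via the nvc0n1 pass, as the 5171 MiB cache split size.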
00:21:19.987 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:21:19.987 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:21:20.246 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:21:20.246 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:21:20.246 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:21:20.246 16:15:38 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:20.246 16:15:38 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:20.246 16:15:38 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:20.246 16:15:38 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e 00:21:20.246 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e 00:21:20.246 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:21:20.246 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:21:20.246 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:21:20.246 16:15:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e 00:21:20.504 16:15:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:21:20.504 { 00:21:20.504 "name": "fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e", 00:21:20.504 "aliases": [ 00:21:20.504 "lvs/nvme0n1p0" 00:21:20.504 ], 00:21:20.504 "product_name": "Logical Volume", 00:21:20.504 "block_size": 4096, 00:21:20.504 "num_blocks": 26476544, 00:21:20.505 "uuid": "fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e", 00:21:20.505 "assigned_rate_limits": { 00:21:20.505 "rw_ios_per_sec": 0, 00:21:20.505 "rw_mbytes_per_sec": 0, 00:21:20.505 "r_mbytes_per_sec": 0, 00:21:20.505 "w_mbytes_per_sec": 0 00:21:20.505 }, 00:21:20.505 "claimed": false, 00:21:20.505 "zoned": false, 00:21:20.505 "supported_io_types": { 00:21:20.505 "read": true, 00:21:20.505 "write": true, 00:21:20.505 "unmap": true, 00:21:20.505 "flush": false, 00:21:20.505 "reset": true, 00:21:20.505 "nvme_admin": false, 00:21:20.505 "nvme_io": false, 00:21:20.505 "nvme_io_md": false, 00:21:20.505 "write_zeroes": true, 00:21:20.505 "zcopy": false, 00:21:20.505 "get_zone_info": false, 00:21:20.505 "zone_management": false, 00:21:20.505 "zone_append": false, 00:21:20.505 "compare": false, 00:21:20.505 "compare_and_write": false, 00:21:20.505 "abort": false, 00:21:20.505 "seek_hole": true, 00:21:20.505 "seek_data": true, 00:21:20.505 "copy": false, 00:21:20.505 "nvme_iov_md": false 00:21:20.505 }, 00:21:20.505 "driver_specific": { 00:21:20.505 "lvol": { 00:21:20.505 "lvol_store_uuid": "886d6167-793d-42f4-b2c0-bc712cd604bc", 00:21:20.505 "base_bdev": "nvme0n1", 00:21:20.505 "thin_provision": true, 00:21:20.505 "num_allocated_clusters": 0, 00:21:20.505 "snapshot": false, 00:21:20.505 "clone": false, 00:21:20.505 "esnap_clone": false 00:21:20.505 } 00:21:20.505 } 00:21:20.505 } 00:21:20.505 ]' 00:21:20.505 16:15:39 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:21:20.505 16:15:39 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:21:20.505 16:15:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:21:20.765 16:15:39 ftl.ftl_restore -- 
common/autotest_common.sh@1386 -- # nb=26476544 00:21:20.765 16:15:39 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:21:20.765 16:15:39 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:21:20.765 16:15:39 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:20.765 16:15:39 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e --l2p_dram_limit 10' 00:21:20.765 16:15:39 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:20.765 16:15:39 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:20.765 16:15:39 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:20.765 16:15:39 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:20.765 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:20.765 16:15:39 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e --l2p_dram_limit 10 -c nvc0n1p0 00:21:20.765 [2024-11-04 16:15:39.429451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.765 [2024-11-04 16:15:39.429502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:20.765 [2024-11-04 16:15:39.429541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:20.765 [2024-11-04 16:15:39.429555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.765 [2024-11-04 16:15:39.429620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.765 [2024-11-04 16:15:39.429635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:20.765 [2024-11-04 16:15:39.429651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:20.765 [2024-11-04 16:15:39.429663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.765 [2024-11-04 16:15:39.429700] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:20.765 [2024-11-04 16:15:39.430697] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:20.765 [2024-11-04 16:15:39.430741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.765 [2024-11-04 16:15:39.430767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:20.765 [2024-11-04 16:15:39.430785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:21:20.765 [2024-11-04 16:15:39.430797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.765 [2024-11-04 16:15:39.430921] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f92d22d1-3db6-4ffd-ae00-b4e7f5d476c5 00:21:20.765 [2024-11-04 16:15:39.432400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.765 [2024-11-04 16:15:39.432441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:20.765 [2024-11-04 16:15:39.432455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:20.765 [2024-11-04 16:15:39.432470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.765 [2024-11-04 16:15:39.440185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.765 [2024-11-04 
16:15:39.440222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:20.765 [2024-11-04 16:15:39.440240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.678 ms 00:21:20.765 [2024-11-04 16:15:39.440255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.765 [2024-11-04 16:15:39.440361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.765 [2024-11-04 16:15:39.440381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:20.765 [2024-11-04 16:15:39.440395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:21:20.765 [2024-11-04 16:15:39.440414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.765 [2024-11-04 16:15:39.440495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.765 [2024-11-04 16:15:39.440513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:20.765 [2024-11-04 16:15:39.440526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:20.765 [2024-11-04 16:15:39.440545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.765 [2024-11-04 16:15:39.440574] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:20.765 [2024-11-04 16:15:39.445778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.765 [2024-11-04 16:15:39.445815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:20.765 [2024-11-04 16:15:39.445849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.215 ms 00:21:20.765 [2024-11-04 16:15:39.445861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.765 [2024-11-04 16:15:39.445901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.766 [2024-11-04 16:15:39.445914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:20.766 [2024-11-04 16:15:39.445930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:20.766 [2024-11-04 16:15:39.445941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.766 [2024-11-04 16:15:39.446006] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:20.766 [2024-11-04 16:15:39.446175] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:20.766 [2024-11-04 16:15:39.446207] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:20.766 [2024-11-04 16:15:39.446223] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:20.766 [2024-11-04 16:15:39.446241] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:20.766 [2024-11-04 16:15:39.446255] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:20.766 [2024-11-04 16:15:39.446271] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:20.766 [2024-11-04 16:15:39.446283] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:20.766 [2024-11-04 16:15:39.446302] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:20.766 [2024-11-04 16:15:39.446314] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:20.766 [2024-11-04 16:15:39.446330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.766 [2024-11-04 16:15:39.446343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:20.766 [2024-11-04 16:15:39.446358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:21:20.766 [2024-11-04 16:15:39.446381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.766 [2024-11-04 16:15:39.446464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.766 [2024-11-04 16:15:39.446477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:20.766 [2024-11-04 16:15:39.446493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:20.766 [2024-11-04 16:15:39.446505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.766 [2024-11-04 16:15:39.446631] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:20.766 [2024-11-04 16:15:39.446646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:20.766 [2024-11-04 16:15:39.446661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:20.766 [2024-11-04 16:15:39.446674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.766 [2024-11-04 16:15:39.446690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:20.766 [2024-11-04 16:15:39.446702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:20.766 [2024-11-04 16:15:39.446717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:20.766 [2024-11-04 16:15:39.446728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:20.766 [2024-11-04 16:15:39.446743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:20.766 [2024-11-04 16:15:39.446754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:20.766 [2024-11-04 16:15:39.446783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:20.766 [2024-11-04 16:15:39.446795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:20.766 [2024-11-04 16:15:39.446809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:20.766 [2024-11-04 16:15:39.446821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:20.766 [2024-11-04 16:15:39.446836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:20.766 [2024-11-04 16:15:39.446848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.766 [2024-11-04 16:15:39.446865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:20.766 [2024-11-04 16:15:39.446876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:20.766 [2024-11-04 16:15:39.446892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.766 [2024-11-04 16:15:39.446905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:20.766 [2024-11-04 16:15:39.446920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:20.766 [2024-11-04 16:15:39.446931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.766 [2024-11-04 16:15:39.446946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:20.766 
[2024-11-04 16:15:39.446957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:20.766 [2024-11-04 16:15:39.446972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.766 [2024-11-04 16:15:39.446984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:20.766 [2024-11-04 16:15:39.446999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:20.766 [2024-11-04 16:15:39.447012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.766 [2024-11-04 16:15:39.447026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:20.766 [2024-11-04 16:15:39.447038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:20.766 [2024-11-04 16:15:39.447052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.766 [2024-11-04 16:15:39.447064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:20.766 [2024-11-04 16:15:39.447082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:20.766 [2024-11-04 16:15:39.447093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:20.766 [2024-11-04 16:15:39.447108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:20.766 [2024-11-04 16:15:39.447119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:20.766 [2024-11-04 16:15:39.447133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:20.766 [2024-11-04 16:15:39.447145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:20.766 [2024-11-04 16:15:39.447160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:20.766 [2024-11-04 16:15:39.447171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.766 [2024-11-04 16:15:39.447185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:20.766 [2024-11-04 16:15:39.447196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:20.766 [2024-11-04 16:15:39.447211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.766 [2024-11-04 16:15:39.447222] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:20.766 [2024-11-04 16:15:39.447237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:20.766 [2024-11-04 16:15:39.447249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:20.766 [2024-11-04 16:15:39.447266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.766 [2024-11-04 16:15:39.447279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:20.766 [2024-11-04 16:15:39.447297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:20.766 [2024-11-04 16:15:39.447308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:20.766 [2024-11-04 16:15:39.447334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:20.766 [2024-11-04 16:15:39.447345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:20.766 [2024-11-04 16:15:39.447359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:20.766 [2024-11-04 16:15:39.447376] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:20.766 [2024-11-04 
16:15:39.447394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:20.766 [2024-11-04 16:15:39.447411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:20.766 [2024-11-04 16:15:39.447427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:20.766 [2024-11-04 16:15:39.447440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:20.766 [2024-11-04 16:15:39.447456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:20.766 [2024-11-04 16:15:39.447469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:20.766 [2024-11-04 16:15:39.447484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:20.766 [2024-11-04 16:15:39.447496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:20.766 [2024-11-04 16:15:39.447511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:20.766 [2024-11-04 16:15:39.447523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:20.766 [2024-11-04 16:15:39.447541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:20.766 [2024-11-04 16:15:39.447553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:20.766 [2024-11-04 16:15:39.447568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:20.766 [2024-11-04 16:15:39.447579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:20.766 [2024-11-04 16:15:39.447597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:20.766 [2024-11-04 16:15:39.447608] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:20.766 [2024-11-04 16:15:39.447625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:20.766 [2024-11-04 16:15:39.447639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:20.766 [2024-11-04 16:15:39.447654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:20.766 [2024-11-04 16:15:39.447667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:20.766 [2024-11-04 16:15:39.447682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:20.766 [2024-11-04 16:15:39.447695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.767 [2024-11-04 16:15:39.447710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:20.767 [2024-11-04 16:15:39.447723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms 00:21:20.767 [2024-11-04 16:15:39.447738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.767 [2024-11-04 16:15:39.447792] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:20.767 [2024-11-04 16:15:39.447813] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:24.963 [2024-11-04 16:15:42.911016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:42.911083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:24.963 [2024-11-04 16:15:42.911102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3468.845 ms 00:21:24.963 [2024-11-04 16:15:42.911119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 [2024-11-04 16:15:42.948295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:42.948366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:24.963 [2024-11-04 16:15:42.948384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.931 ms 00:21:24.963 [2024-11-04 16:15:42.948400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 [2024-11-04 16:15:42.948541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:42.948560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:24.963 [2024-11-04 16:15:42.948573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:24.963 [2024-11-04 16:15:42.948591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 [2024-11-04 16:15:42.992915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:42.992966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:24.963 [2024-11-04 16:15:42.992981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.326 ms 00:21:24.963 [2024-11-04 16:15:42.993015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 [2024-11-04 16:15:42.993055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:42.993075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:24.963 [2024-11-04 16:15:42.993088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:24.963 [2024-11-04 16:15:42.993103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 [2024-11-04 16:15:42.993608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:42.993640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:24.963 [2024-11-04 16:15:42.993654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:21:24.963 [2024-11-04 16:15:42.993669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 
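The whole startup trace above stems from the single bdev_ftl_create RPC issued at restore.sh line 58: it binds FTL instance ftl0 to the base bdev (referenced by UUID), attaches nvc0n1p0 as the non-volatile write-buffer cache, and caps the DRAM-resident L2P at 10 MiB via --l2p_dram_limit (the l2p_cache_init notice further below reports an effective resident size of 9 of 10 MiB). The earlier "[: : integer expression expected" message is bash rejecting an empty string in the numeric test at restore.sh line 54; that guarded branch is simply skipped and the run continues. A minimal sketch of driving the same create/save/unload sequence by hand against a running SPDK target follows; the UUID and paths are the ones from this run and would differ elsewhere, and the redirect target is assumed from the --json argument used later in the log:

    # Create the FTL bdev the same way the test does (base bdev by UUID, nvc0n1p0 as
    # write-buffer cache, L2P limited to 10 MiB of DRAM), capture the bdev subsystem
    # config, then unload the instance.
    ./scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d fc0a87e2-e8fa-4c8c-a1d4-b18b51b5842e \
        --l2p_dram_limit 10 -c nvc0n1p0
    {
        echo '{"subsystems": ['
        ./scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > test/ftl/config/ftl.json     # path assumed; it matches the --json file fed to spdk_dd below
    ./scripts/rpc.py bdev_ftl_unload -b ftl0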
[2024-11-04 16:15:42.993788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:42.993806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:24.963 [2024-11-04 16:15:42.993821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:21:24.963 [2024-11-04 16:15:42.993839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 [2024-11-04 16:15:43.012813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:43.012861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:24.963 [2024-11-04 16:15:43.012895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.979 ms 00:21:24.963 [2024-11-04 16:15:43.012910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 [2024-11-04 16:15:43.025129] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:24.963 [2024-11-04 16:15:43.028430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:43.028463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:24.963 [2024-11-04 16:15:43.028480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.448 ms 00:21:24.963 [2024-11-04 16:15:43.028492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 [2024-11-04 16:15:43.127467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:43.127552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:24.963 [2024-11-04 16:15:43.127574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.077 ms 00:21:24.963 [2024-11-04 16:15:43.127587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 [2024-11-04 16:15:43.127792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:43.127812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:24.963 [2024-11-04 16:15:43.127831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:21:24.963 [2024-11-04 16:15:43.127843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 [2024-11-04 16:15:43.162842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:43.162888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:24.963 [2024-11-04 16:15:43.162907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.989 ms 00:21:24.963 [2024-11-04 16:15:43.162936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 [2024-11-04 16:15:43.196788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:43.196827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:24.963 [2024-11-04 16:15:43.196847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.850 ms 00:21:24.963 [2024-11-04 16:15:43.196858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 [2024-11-04 16:15:43.197549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:43.197583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:24.963 
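Each management step in this trace is reported as the same four-entry group from mngt/ftl_mngt.c — Action (427), name (428), duration (430) and status (431) — so per-step timings can be pulled out of a saved copy of the log mechanically. A hypothetical post-processing one-liner, not part of the test suite, assuming one log entry per line as SPDK emits them and an illustrative file name of ftl_restore.log:

    # Pair every trace_step name with the duration reported for that step.
    awk -F': ' '/trace_step/ && /name:/     {step=$NF}
                /trace_step/ && /duration:/ {print step " -> " $NF}' ftl_restore.log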
[2024-11-04 16:15:43.197601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:21:24.963 [2024-11-04 16:15:43.197613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 [2024-11-04 16:15:43.296909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:43.296951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:24.963 [2024-11-04 16:15:43.296973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.383 ms 00:21:24.963 [2024-11-04 16:15:43.297002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 [2024-11-04 16:15:43.331972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:43.332022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:24.963 [2024-11-04 16:15:43.332042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.927 ms 00:21:24.963 [2024-11-04 16:15:43.332072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 [2024-11-04 16:15:43.368224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:43.368266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:24.963 [2024-11-04 16:15:43.368302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.156 ms 00:21:24.963 [2024-11-04 16:15:43.368314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 [2024-11-04 16:15:43.403685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.963 [2024-11-04 16:15:43.403730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:24.963 [2024-11-04 16:15:43.403778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.376 ms 00:21:24.963 [2024-11-04 16:15:43.403791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.963 [2024-11-04 16:15:43.403845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.964 [2024-11-04 16:15:43.403859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:24.964 [2024-11-04 16:15:43.403878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:24.964 [2024-11-04 16:15:43.403890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.964 [2024-11-04 16:15:43.403999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.964 [2024-11-04 16:15:43.404030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:24.964 [2024-11-04 16:15:43.404050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:24.964 [2024-11-04 16:15:43.404062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.964 [2024-11-04 16:15:43.405281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3981.793 ms, result 0 00:21:24.964 { 00:21:24.964 "name": "ftl0", 00:21:24.964 "uuid": "f92d22d1-3db6-4ffd-ae00-b4e7f5d476c5" 00:21:24.964 } 00:21:24.964 16:15:43 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:24.964 16:15:43 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:24.964 16:15:43 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:24.964 16:15:43 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:25.223 [2024-11-04 16:15:43.827976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.223 [2024-11-04 16:15:43.828028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:25.223 [2024-11-04 16:15:43.828043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:25.223 [2024-11-04 16:15:43.828084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.223 [2024-11-04 16:15:43.828113] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:25.223 [2024-11-04 16:15:43.832230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.223 [2024-11-04 16:15:43.832266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:25.223 [2024-11-04 16:15:43.832284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.098 ms 00:21:25.223 [2024-11-04 16:15:43.832295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.223 [2024-11-04 16:15:43.832564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.223 [2024-11-04 16:15:43.832580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:25.223 [2024-11-04 16:15:43.832616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:21:25.223 [2024-11-04 16:15:43.832628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.223 [2024-11-04 16:15:43.835150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.223 [2024-11-04 16:15:43.835182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:25.223 [2024-11-04 16:15:43.835198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.502 ms 00:21:25.223 [2024-11-04 16:15:43.835210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.223 [2024-11-04 16:15:43.840169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.223 [2024-11-04 16:15:43.840205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:25.223 [2024-11-04 16:15:43.840242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.938 ms 00:21:25.223 [2024-11-04 16:15:43.840254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.223 [2024-11-04 16:15:43.873671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.223 [2024-11-04 16:15:43.873712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:25.223 [2024-11-04 16:15:43.873730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.392 ms 00:21:25.223 [2024-11-04 16:15:43.873741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.223 [2024-11-04 16:15:43.895052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.223 [2024-11-04 16:15:43.895092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:25.223 [2024-11-04 16:15:43.895110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.266 ms 00:21:25.223 [2024-11-04 16:15:43.895139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.223 [2024-11-04 16:15:43.895294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.223 [2024-11-04 16:15:43.895310] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:25.223 [2024-11-04 16:15:43.895326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:21:25.223 [2024-11-04 16:15:43.895338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.223 [2024-11-04 16:15:43.929600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.223 [2024-11-04 16:15:43.929641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:25.223 [2024-11-04 16:15:43.929659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.290 ms 00:21:25.223 [2024-11-04 16:15:43.929687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.484 [2024-11-04 16:15:43.965653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.484 [2024-11-04 16:15:43.965694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:25.484 [2024-11-04 16:15:43.965712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.973 ms 00:21:25.484 [2024-11-04 16:15:43.965725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.484 [2024-11-04 16:15:44.001428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.484 [2024-11-04 16:15:44.001465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:25.484 [2024-11-04 16:15:44.001499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.698 ms 00:21:25.484 [2024-11-04 16:15:44.001511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.484 [2024-11-04 16:15:44.036789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.484 [2024-11-04 16:15:44.036825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:25.484 [2024-11-04 16:15:44.036844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.202 ms 00:21:25.484 [2024-11-04 16:15:44.036857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.484 [2024-11-04 16:15:44.036925] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:25.484 [2024-11-04 16:15:44.036945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.036962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.036975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.036991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037081] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 
[2024-11-04 16:15:44.037441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:25.484 [2024-11-04 16:15:44.037545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:25.485 [2024-11-04 16:15:44.037844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.037999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:25.485 [2024-11-04 16:15:44.038444] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:25.485 [2024-11-04 16:15:44.038463] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f92d22d1-3db6-4ffd-ae00-b4e7f5d476c5 00:21:25.485 [2024-11-04 16:15:44.038476] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:25.485 [2024-11-04 16:15:44.038494] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:25.485 [2024-11-04 16:15:44.038506] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:25.485 [2024-11-04 16:15:44.038535] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:25.485 [2024-11-04 16:15:44.038547] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:25.485 [2024-11-04 16:15:44.038562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:25.485 [2024-11-04 16:15:44.038575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:25.485 [2024-11-04 16:15:44.038589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:25.485 [2024-11-04 16:15:44.038600] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:25.485 [2024-11-04 16:15:44.038615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.485 [2024-11-04 16:15:44.038628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:25.485 [2024-11-04 16:15:44.038644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.695 ms 00:21:25.485 [2024-11-04 16:15:44.038657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.485 [2024-11-04 16:15:44.058231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.485 [2024-11-04 16:15:44.058267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:25.485 [2024-11-04 16:15:44.058301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.537 ms 00:21:25.485 [2024-11-04 16:15:44.058314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.485 [2024-11-04 16:15:44.058932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.485 [2024-11-04 16:15:44.058955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:25.485 [2024-11-04 16:15:44.058973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:21:25.485 [2024-11-04 16:15:44.058989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.485 [2024-11-04 16:15:44.121017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.485 [2024-11-04 16:15:44.121054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:25.485 [2024-11-04 16:15:44.121071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.485 [2024-11-04 16:15:44.121083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.485 [2024-11-04 16:15:44.121162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.485 [2024-11-04 16:15:44.121176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:25.485 [2024-11-04 16:15:44.121191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.485 [2024-11-04 16:15:44.121206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.485 [2024-11-04 16:15:44.121321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.485 [2024-11-04 16:15:44.121336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:25.485 [2024-11-04 16:15:44.121352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.485 [2024-11-04 16:15:44.121363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.485 [2024-11-04 16:15:44.121391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.485 [2024-11-04 16:15:44.121403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:25.485 [2024-11-04 16:15:44.121435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.485 [2024-11-04 16:15:44.121448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.745 [2024-11-04 16:15:44.239341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.745 [2024-11-04 16:15:44.239394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:25.745 [2024-11-04 16:15:44.239415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:25.745 [2024-11-04 16:15:44.239444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.745 [2024-11-04 16:15:44.333804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.745 [2024-11-04 16:15:44.333847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:25.745 [2024-11-04 16:15:44.333866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.745 [2024-11-04 16:15:44.333898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.745 [2024-11-04 16:15:44.334028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.745 [2024-11-04 16:15:44.334044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:25.745 [2024-11-04 16:15:44.334059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.745 [2024-11-04 16:15:44.334071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.745 [2024-11-04 16:15:44.334134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.745 [2024-11-04 16:15:44.334147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:25.745 [2024-11-04 16:15:44.334162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.745 [2024-11-04 16:15:44.334175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.745 [2024-11-04 16:15:44.334296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.745 [2024-11-04 16:15:44.334310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:25.745 [2024-11-04 16:15:44.334325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.746 [2024-11-04 16:15:44.334337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.746 [2024-11-04 16:15:44.334384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.746 [2024-11-04 16:15:44.334399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:25.746 [2024-11-04 16:15:44.334414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.746 [2024-11-04 16:15:44.334426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.746 [2024-11-04 16:15:44.334490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.746 [2024-11-04 16:15:44.334506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:25.746 [2024-11-04 16:15:44.334532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.746 [2024-11-04 16:15:44.334545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.746 [2024-11-04 16:15:44.334600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.746 [2024-11-04 16:15:44.334615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:25.746 [2024-11-04 16:15:44.334630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.746 [2024-11-04 16:15:44.334642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.746 [2024-11-04 16:15:44.334804] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 507.590 ms, result 0 00:21:25.746 true 00:21:25.746 16:15:44 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76259 
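With the FTL instance unloaded (the 'FTL shutdown' management process above finished in 507.590 ms with result 0), restore.sh tears down the first SPDK app through the killprocess helper from autotest_common.sh, whose checks are traced below. A rough sketch of that logic, reconstructed only from the trace that follows; the real helper has additional branches (non-Linux hosts, forced kill) that this run never reaches:

    # Simplified reconstruction of the killprocess flow seen below.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                      # require a PID argument
        kill -0 "$pid" 2>/dev/null || return 0         # nothing to do if it already exited
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1         # never kill a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                    # reap the SPDK reactor (reactor_0 here)
    }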
00:21:25.746 16:15:44 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 76259 ']' 00:21:25.746 16:15:44 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 76259 00:21:25.746 16:15:44 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:21:25.746 16:15:44 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:25.746 16:15:44 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76259 00:21:25.746 16:15:44 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:25.746 killing process with pid 76259 00:21:25.746 16:15:44 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:25.746 16:15:44 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76259' 00:21:25.746 16:15:44 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 76259 00:21:25.746 16:15:44 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 76259 00:21:31.030 16:15:49 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:35.222 262144+0 records in 00:21:35.222 262144+0 records out 00:21:35.222 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.12253 s, 260 MB/s 00:21:35.222 16:15:53 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:36.600 16:15:55 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:36.600 [2024-11-04 16:15:55.224737] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:21:36.600 [2024-11-04 16:15:55.224885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76499 ] 00:21:36.858 [2024-11-04 16:15:55.413536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.858 [2024-11-04 16:15:55.525310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.427 [2024-11-04 16:15:55.878435] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:37.427 [2024-11-04 16:15:55.878548] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:37.427 [2024-11-04 16:15:56.047191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.427 [2024-11-04 16:15:56.047244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:37.427 [2024-11-04 16:15:56.047284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:37.427 [2024-11-04 16:15:56.047296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.427 [2024-11-04 16:15:56.047349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.427 [2024-11-04 16:15:56.047363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:37.427 [2024-11-04 16:15:56.047379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:37.427 [2024-11-04 16:15:56.047390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.427 [2024-11-04 16:15:56.047414] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:37.427 [2024-11-04 16:15:56.048400] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:37.427 [2024-11-04 16:15:56.048435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.427 [2024-11-04 16:15:56.048448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:37.427 [2024-11-04 16:15:56.048461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:21:37.427 [2024-11-04 16:15:56.048472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.427 [2024-11-04 16:15:56.049975] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:37.427 [2024-11-04 16:15:56.068181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.427 [2024-11-04 16:15:56.068225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:37.427 [2024-11-04 16:15:56.068257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.246 ms 00:21:37.428 [2024-11-04 16:15:56.068270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.428 [2024-11-04 16:15:56.068358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.428 [2024-11-04 16:15:56.068378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:37.428 [2024-11-04 16:15:56.068390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:37.428 [2024-11-04 16:15:56.068402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.428 [2024-11-04 16:15:56.075392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.428 [2024-11-04 16:15:56.075425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:37.428 [2024-11-04 16:15:56.075438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.919 ms 00:21:37.428 [2024-11-04 16:15:56.075450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.428 [2024-11-04 16:15:56.075550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.428 [2024-11-04 16:15:56.075565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:37.428 [2024-11-04 16:15:56.075578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:37.428 [2024-11-04 16:15:56.075589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.428 [2024-11-04 16:15:56.075634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.428 [2024-11-04 16:15:56.075647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:37.428 [2024-11-04 16:15:56.075659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:37.428 [2024-11-04 16:15:56.075670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.428 [2024-11-04 16:15:56.075698] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:37.428 [2024-11-04 16:15:56.080497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.428 [2024-11-04 16:15:56.080531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:37.428 [2024-11-04 16:15:56.080561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.813 ms 00:21:37.428 [2024-11-04 16:15:56.080577] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.428 [2024-11-04 16:15:56.080610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.428 [2024-11-04 16:15:56.080623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:37.428 [2024-11-04 16:15:56.080635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:37.428 [2024-11-04 16:15:56.080646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.428 [2024-11-04 16:15:56.080703] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:37.428 [2024-11-04 16:15:56.080729] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:37.428 [2024-11-04 16:15:56.080778] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:37.428 [2024-11-04 16:15:56.080801] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:37.428 [2024-11-04 16:15:56.080906] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:37.428 [2024-11-04 16:15:56.080922] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:37.428 [2024-11-04 16:15:56.080937] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:37.428 [2024-11-04 16:15:56.080951] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:37.428 [2024-11-04 16:15:56.080965] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:37.428 [2024-11-04 16:15:56.080978] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:37.428 [2024-11-04 16:15:56.080989] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:37.428 [2024-11-04 16:15:56.081001] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:37.428 [2024-11-04 16:15:56.081011] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:37.428 [2024-11-04 16:15:56.081028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.428 [2024-11-04 16:15:56.081040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:37.428 [2024-11-04 16:15:56.081052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:21:37.428 [2024-11-04 16:15:56.081063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.428 [2024-11-04 16:15:56.081141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.428 [2024-11-04 16:15:56.081155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:37.428 [2024-11-04 16:15:56.081166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:37.428 [2024-11-04 16:15:56.081178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.428 [2024-11-04 16:15:56.081277] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:37.428 [2024-11-04 16:15:56.081308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:37.428 [2024-11-04 16:15:56.081321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:37.428 [2024-11-04 16:15:56.081333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.428 [2024-11-04 16:15:56.081345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:37.428 [2024-11-04 16:15:56.081356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:37.428 [2024-11-04 16:15:56.081367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:37.428 [2024-11-04 16:15:56.081379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:37.428 [2024-11-04 16:15:56.081390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:37.428 [2024-11-04 16:15:56.081401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:37.428 [2024-11-04 16:15:56.081412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:37.428 [2024-11-04 16:15:56.081424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:37.428 [2024-11-04 16:15:56.081435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:37.428 [2024-11-04 16:15:56.081446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:37.428 [2024-11-04 16:15:56.081457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:37.428 [2024-11-04 16:15:56.081479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.428 [2024-11-04 16:15:56.081491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:37.428 [2024-11-04 16:15:56.081502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:37.428 [2024-11-04 16:15:56.081512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.428 [2024-11-04 16:15:56.081524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:37.428 [2024-11-04 16:15:56.081535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:37.428 [2024-11-04 16:15:56.081547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.428 [2024-11-04 16:15:56.081558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:37.428 [2024-11-04 16:15:56.081569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:37.428 [2024-11-04 16:15:56.081579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.428 [2024-11-04 16:15:56.081590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:37.428 [2024-11-04 16:15:56.081601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:37.428 [2024-11-04 16:15:56.081611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.428 [2024-11-04 16:15:56.081622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:37.428 [2024-11-04 16:15:56.081633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:37.428 [2024-11-04 16:15:56.081644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.428 [2024-11-04 16:15:56.081655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:37.428 [2024-11-04 16:15:56.081665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:37.428 [2024-11-04 16:15:56.081676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:37.428 [2024-11-04 16:15:56.081687] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:37.428 [2024-11-04 16:15:56.081698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:37.428 [2024-11-04 16:15:56.081708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:37.428 [2024-11-04 16:15:56.081719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:37.428 [2024-11-04 16:15:56.081730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:37.428 [2024-11-04 16:15:56.081741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.428 [2024-11-04 16:15:56.081770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:37.428 [2024-11-04 16:15:56.081782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:37.428 [2024-11-04 16:15:56.081793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.428 [2024-11-04 16:15:56.081804] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:37.428 [2024-11-04 16:15:56.081817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:37.428 [2024-11-04 16:15:56.081828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:37.428 [2024-11-04 16:15:56.081840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.428 [2024-11-04 16:15:56.081852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:37.428 [2024-11-04 16:15:56.081864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:37.428 [2024-11-04 16:15:56.081875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:37.428 [2024-11-04 16:15:56.081887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:37.428 [2024-11-04 16:15:56.081897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:37.428 [2024-11-04 16:15:56.081908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:37.428 [2024-11-04 16:15:56.081920] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:37.428 [2024-11-04 16:15:56.081934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:37.428 [2024-11-04 16:15:56.081948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:37.428 [2024-11-04 16:15:56.081960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:37.428 [2024-11-04 16:15:56.081972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:37.429 [2024-11-04 16:15:56.081984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:37.429 [2024-11-04 16:15:56.081995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:37.429 [2024-11-04 16:15:56.082007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:37.429 [2024-11-04 16:15:56.082020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:37.429 [2024-11-04 16:15:56.082032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:37.429 [2024-11-04 16:15:56.082044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:37.429 [2024-11-04 16:15:56.082056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:37.429 [2024-11-04 16:15:56.082069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:37.429 [2024-11-04 16:15:56.082080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:37.429 [2024-11-04 16:15:56.082092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:37.429 [2024-11-04 16:15:56.082104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:37.429 [2024-11-04 16:15:56.082115] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:37.429 [2024-11-04 16:15:56.082132] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:37.429 [2024-11-04 16:15:56.082145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:37.429 [2024-11-04 16:15:56.082158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:37.429 [2024-11-04 16:15:56.082170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:37.429 [2024-11-04 16:15:56.082182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:37.429 [2024-11-04 16:15:56.082196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.429 [2024-11-04 16:15:56.082208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:37.429 [2024-11-04 16:15:56.082220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:21:37.429 [2024-11-04 16:15:56.082232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.429 [2024-11-04 16:15:56.124282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.429 [2024-11-04 16:15:56.124321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:37.429 [2024-11-04 16:15:56.124335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.067 ms 00:21:37.429 [2024-11-04 16:15:56.124347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.429 [2024-11-04 16:15:56.124449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.429 [2024-11-04 16:15:56.124462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:37.429 [2024-11-04 16:15:56.124475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.049 ms 00:21:37.429 [2024-11-04 16:15:56.124487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.688 [2024-11-04 16:15:56.201085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.688 [2024-11-04 16:15:56.201127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:37.688 [2024-11-04 16:15:56.201143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.662 ms 00:21:37.688 [2024-11-04 16:15:56.201171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.688 [2024-11-04 16:15:56.201215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.688 [2024-11-04 16:15:56.201228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:37.688 [2024-11-04 16:15:56.201241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:37.688 [2024-11-04 16:15:56.201262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.688 [2024-11-04 16:15:56.201812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.688 [2024-11-04 16:15:56.201837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:37.688 [2024-11-04 16:15:56.201851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:21:37.688 [2024-11-04 16:15:56.201863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.688 [2024-11-04 16:15:56.201991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.688 [2024-11-04 16:15:56.202007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:37.688 [2024-11-04 16:15:56.202021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:21:37.688 [2024-11-04 16:15:56.202045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.688 [2024-11-04 16:15:56.223187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.688 [2024-11-04 16:15:56.223224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:37.688 [2024-11-04 16:15:56.223263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.150 ms 00:21:37.688 [2024-11-04 16:15:56.223275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.688 [2024-11-04 16:15:56.242996] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:37.688 [2024-11-04 16:15:56.243044] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:37.688 [2024-11-04 16:15:56.243062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.688 [2024-11-04 16:15:56.243074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:37.688 [2024-11-04 16:15:56.243104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.713 ms 00:21:37.688 [2024-11-04 16:15:56.243116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.688 [2024-11-04 16:15:56.272459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.689 [2024-11-04 16:15:56.272507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:37.689 [2024-11-04 16:15:56.272531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.340 ms 00:21:37.689 [2024-11-04 16:15:56.272559] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.689 [2024-11-04 16:15:56.290675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.689 [2024-11-04 16:15:56.290728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:37.689 [2024-11-04 16:15:56.290758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.089 ms 00:21:37.689 [2024-11-04 16:15:56.290781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.689 [2024-11-04 16:15:56.308263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.689 [2024-11-04 16:15:56.308301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:37.689 [2024-11-04 16:15:56.308315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.467 ms 00:21:37.689 [2024-11-04 16:15:56.308326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.689 [2024-11-04 16:15:56.309198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.689 [2024-11-04 16:15:56.309234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:37.689 [2024-11-04 16:15:56.309248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:21:37.689 [2024-11-04 16:15:56.309259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.689 [2024-11-04 16:15:56.391659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.689 [2024-11-04 16:15:56.391727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:37.689 [2024-11-04 16:15:56.391769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.501 ms 00:21:37.689 [2024-11-04 16:15:56.391796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.689 [2024-11-04 16:15:56.402466] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:37.689 [2024-11-04 16:15:56.404927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.689 [2024-11-04 16:15:56.404961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:37.689 [2024-11-04 16:15:56.404976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.100 ms 00:21:37.689 [2024-11-04 16:15:56.404987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.689 [2024-11-04 16:15:56.405100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.689 [2024-11-04 16:15:56.405116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:37.689 [2024-11-04 16:15:56.405129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:37.689 [2024-11-04 16:15:56.405141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.689 [2024-11-04 16:15:56.405228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.689 [2024-11-04 16:15:56.405241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:37.689 [2024-11-04 16:15:56.405254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:37.689 [2024-11-04 16:15:56.405266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.689 [2024-11-04 16:15:56.405294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.689 [2024-11-04 16:15:56.405308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:37.689 [2024-11-04 16:15:56.405320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:37.689 [2024-11-04 16:15:56.405331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.689 [2024-11-04 16:15:56.405376] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:37.689 [2024-11-04 16:15:56.405390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.689 [2024-11-04 16:15:56.405411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:37.689 [2024-11-04 16:15:56.405424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:37.689 [2024-11-04 16:15:56.405435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.948 [2024-11-04 16:15:56.441647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.948 [2024-11-04 16:15:56.441695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:37.948 [2024-11-04 16:15:56.441711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.243 ms 00:21:37.948 [2024-11-04 16:15:56.441740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.948 [2024-11-04 16:15:56.441850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.948 [2024-11-04 16:15:56.441865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:37.948 [2024-11-04 16:15:56.441878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:37.948 [2024-11-04 16:15:56.441889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.948 [2024-11-04 16:15:56.443235] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.186 ms, result 0 00:21:38.885  [2024-11-04T16:15:58.544Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-04T16:15:59.482Z] Copying: 44/1024 [MB] (22 MBps) [2024-11-04T16:16:00.859Z] Copying: 67/1024 [MB] (23 MBps) [2024-11-04T16:16:01.796Z] Copying: 92/1024 [MB] (24 MBps) [2024-11-04T16:16:02.734Z] Copying: 116/1024 [MB] (23 MBps) [2024-11-04T16:16:03.671Z] Copying: 139/1024 [MB] (23 MBps) [2024-11-04T16:16:04.609Z] Copying: 162/1024 [MB] (23 MBps) [2024-11-04T16:16:05.546Z] Copying: 186/1024 [MB] (24 MBps) [2024-11-04T16:16:06.485Z] Copying: 210/1024 [MB] (23 MBps) [2024-11-04T16:16:07.862Z] Copying: 233/1024 [MB] (23 MBps) [2024-11-04T16:16:08.800Z] Copying: 257/1024 [MB] (23 MBps) [2024-11-04T16:16:09.739Z] Copying: 281/1024 [MB] (23 MBps) [2024-11-04T16:16:10.676Z] Copying: 304/1024 [MB] (22 MBps) [2024-11-04T16:16:11.613Z] Copying: 327/1024 [MB] (23 MBps) [2024-11-04T16:16:12.552Z] Copying: 350/1024 [MB] (22 MBps) [2024-11-04T16:16:13.490Z] Copying: 373/1024 [MB] (23 MBps) [2024-11-04T16:16:14.429Z] Copying: 397/1024 [MB] (24 MBps) [2024-11-04T16:16:15.806Z] Copying: 421/1024 [MB] (23 MBps) [2024-11-04T16:16:16.743Z] Copying: 445/1024 [MB] (24 MBps) [2024-11-04T16:16:17.680Z] Copying: 469/1024 [MB] (23 MBps) [2024-11-04T16:16:18.617Z] Copying: 492/1024 [MB] (22 MBps) [2024-11-04T16:16:19.554Z] Copying: 516/1024 [MB] (24 MBps) [2024-11-04T16:16:20.489Z] Copying: 540/1024 [MB] (23 MBps) [2024-11-04T16:16:21.434Z] Copying: 563/1024 [MB] (23 MBps) [2024-11-04T16:16:22.813Z] Copying: 587/1024 [MB] (23 MBps) [2024-11-04T16:16:23.755Z] Copying: 611/1024 [MB] (23 MBps) [2024-11-04T16:16:24.694Z] Copying: 635/1024 [MB] (23 
MBps) [2024-11-04T16:16:25.631Z] Copying: 657/1024 [MB] (21 MBps) [2024-11-04T16:16:26.568Z] Copying: 681/1024 [MB] (23 MBps) [2024-11-04T16:16:27.504Z] Copying: 704/1024 [MB] (23 MBps) [2024-11-04T16:16:28.442Z] Copying: 726/1024 [MB] (22 MBps) [2024-11-04T16:16:29.823Z] Copying: 750/1024 [MB] (23 MBps) [2024-11-04T16:16:30.764Z] Copying: 774/1024 [MB] (23 MBps) [2024-11-04T16:16:31.701Z] Copying: 798/1024 [MB] (23 MBps) [2024-11-04T16:16:32.642Z] Copying: 822/1024 [MB] (24 MBps) [2024-11-04T16:16:33.580Z] Copying: 846/1024 [MB] (24 MBps) [2024-11-04T16:16:34.516Z] Copying: 870/1024 [MB] (23 MBps) [2024-11-04T16:16:35.453Z] Copying: 894/1024 [MB] (23 MBps) [2024-11-04T16:16:36.390Z] Copying: 917/1024 [MB] (23 MBps) [2024-11-04T16:16:37.769Z] Copying: 939/1024 [MB] (21 MBps) [2024-11-04T16:16:38.706Z] Copying: 962/1024 [MB] (22 MBps) [2024-11-04T16:16:39.642Z] Copying: 985/1024 [MB] (23 MBps) [2024-11-04T16:16:40.210Z] Copying: 1007/1024 [MB] (22 MBps) [2024-11-04T16:16:40.210Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-04 16:16:40.081168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.488 [2024-11-04 16:16:40.081221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:21.488 [2024-11-04 16:16:40.081239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:21.488 [2024-11-04 16:16:40.081252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.488 [2024-11-04 16:16:40.081276] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:21.488 [2024-11-04 16:16:40.085391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.488 [2024-11-04 16:16:40.085431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:21.488 [2024-11-04 16:16:40.085447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.102 ms 00:22:21.488 [2024-11-04 16:16:40.085459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.488 [2024-11-04 16:16:40.087347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.488 [2024-11-04 16:16:40.087393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:21.488 [2024-11-04 16:16:40.087407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.853 ms 00:22:21.488 [2024-11-04 16:16:40.087419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.488 [2024-11-04 16:16:40.105095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.488 [2024-11-04 16:16:40.105141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:21.488 [2024-11-04 16:16:40.105156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.683 ms 00:22:21.488 [2024-11-04 16:16:40.105167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.488 [2024-11-04 16:16:40.110037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.489 [2024-11-04 16:16:40.110080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:21.489 [2024-11-04 16:16:40.110092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.822 ms 00:22:21.489 [2024-11-04 16:16:40.110103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.489 [2024-11-04 16:16:40.145013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.489 [2024-11-04 
16:16:40.145060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:21.489 [2024-11-04 16:16:40.145076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.888 ms 00:22:21.489 [2024-11-04 16:16:40.145087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.489 [2024-11-04 16:16:40.165995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.489 [2024-11-04 16:16:40.166040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:21.489 [2024-11-04 16:16:40.166062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.881 ms 00:22:21.489 [2024-11-04 16:16:40.166075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.489 [2024-11-04 16:16:40.166204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.489 [2024-11-04 16:16:40.166221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:21.489 [2024-11-04 16:16:40.166239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:22:21.489 [2024-11-04 16:16:40.166250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.489 [2024-11-04 16:16:40.202385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.489 [2024-11-04 16:16:40.202429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:21.489 [2024-11-04 16:16:40.202444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.174 ms 00:22:21.489 [2024-11-04 16:16:40.202455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.749 [2024-11-04 16:16:40.238120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.749 [2024-11-04 16:16:40.238162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:21.749 [2024-11-04 16:16:40.238192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.661 ms 00:22:21.749 [2024-11-04 16:16:40.238220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.749 [2024-11-04 16:16:40.273044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.749 [2024-11-04 16:16:40.273088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:21.749 [2024-11-04 16:16:40.273120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.837 ms 00:22:21.749 [2024-11-04 16:16:40.273131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.749 [2024-11-04 16:16:40.307229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.749 [2024-11-04 16:16:40.307272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:21.749 [2024-11-04 16:16:40.307286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.072 ms 00:22:21.749 [2024-11-04 16:16:40.307297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.749 [2024-11-04 16:16:40.307354] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:21.749 [2024-11-04 16:16:40.307372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307714] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.307993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.308005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.308017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.308029] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.308041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.308053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.308065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:21.749 [2024-11-04 16:16:40.308077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 
16:16:40.308329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:21.750 [2024-11-04 16:16:40.308616] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:21.750 [2024-11-04 16:16:40.308635] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f92d22d1-3db6-4ffd-ae00-b4e7f5d476c5 00:22:21.750 [2024-11-04 
16:16:40.308648] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:21.750 [2024-11-04 16:16:40.308664] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:21.750 [2024-11-04 16:16:40.308675] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:21.750 [2024-11-04 16:16:40.308687] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:21.750 [2024-11-04 16:16:40.308698] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:21.750 [2024-11-04 16:16:40.308710] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:21.750 [2024-11-04 16:16:40.308722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:21.750 [2024-11-04 16:16:40.308745] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:21.750 [2024-11-04 16:16:40.308766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:21.750 [2024-11-04 16:16:40.308777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.750 [2024-11-04 16:16:40.308789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:21.750 [2024-11-04 16:16:40.308802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.426 ms 00:22:21.750 [2024-11-04 16:16:40.308813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.750 [2024-11-04 16:16:40.328108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.750 [2024-11-04 16:16:40.328148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:21.750 [2024-11-04 16:16:40.328162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.286 ms 00:22:21.750 [2024-11-04 16:16:40.328173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.750 [2024-11-04 16:16:40.328709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.750 [2024-11-04 16:16:40.328729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:21.750 [2024-11-04 16:16:40.328742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:22:21.750 [2024-11-04 16:16:40.328772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.750 [2024-11-04 16:16:40.377783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.750 [2024-11-04 16:16:40.377821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:21.750 [2024-11-04 16:16:40.377853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.750 [2024-11-04 16:16:40.377866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.750 [2024-11-04 16:16:40.377922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.750 [2024-11-04 16:16:40.377934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:21.750 [2024-11-04 16:16:40.377946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.750 [2024-11-04 16:16:40.377957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.750 [2024-11-04 16:16:40.378059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.750 [2024-11-04 16:16:40.378075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:21.750 [2024-11-04 16:16:40.378087] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.750 [2024-11-04 16:16:40.378098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.750 [2024-11-04 16:16:40.378118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.750 [2024-11-04 16:16:40.378130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:21.750 [2024-11-04 16:16:40.378141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.750 [2024-11-04 16:16:40.378153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.010 [2024-11-04 16:16:40.495302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.010 [2024-11-04 16:16:40.495357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:22.010 [2024-11-04 16:16:40.495389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.010 [2024-11-04 16:16:40.495401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.010 [2024-11-04 16:16:40.589988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.010 [2024-11-04 16:16:40.590057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:22.010 [2024-11-04 16:16:40.590072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.010 [2024-11-04 16:16:40.590101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.010 [2024-11-04 16:16:40.590204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.010 [2024-11-04 16:16:40.590227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:22.010 [2024-11-04 16:16:40.590241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.010 [2024-11-04 16:16:40.590252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.010 [2024-11-04 16:16:40.590292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.010 [2024-11-04 16:16:40.590305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:22.010 [2024-11-04 16:16:40.590317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.010 [2024-11-04 16:16:40.590328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.010 [2024-11-04 16:16:40.590437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.010 [2024-11-04 16:16:40.590477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:22.010 [2024-11-04 16:16:40.590490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.010 [2024-11-04 16:16:40.590502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.010 [2024-11-04 16:16:40.590554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.010 [2024-11-04 16:16:40.590568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:22.010 [2024-11-04 16:16:40.590580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.010 [2024-11-04 16:16:40.590591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.010 [2024-11-04 16:16:40.590631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.010 [2024-11-04 16:16:40.590643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 
00:22:22.010 [2024-11-04 16:16:40.590664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.010 [2024-11-04 16:16:40.590676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.010 [2024-11-04 16:16:40.590721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.010 [2024-11-04 16:16:40.590735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:22.010 [2024-11-04 16:16:40.590766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.010 [2024-11-04 16:16:40.590779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.010 [2024-11-04 16:16:40.590922] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 510.537 ms, result 0 00:22:23.388 00:22:23.388 00:22:23.388 16:16:41 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:23.388 [2024-11-04 16:16:41.844327] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:22:23.388 [2024-11-04 16:16:41.844438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76974 ] 00:22:23.388 [2024-11-04 16:16:42.025098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.648 [2024-11-04 16:16:42.128521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.907 [2024-11-04 16:16:42.472841] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:23.907 [2024-11-04 16:16:42.472912] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:24.168 [2024-11-04 16:16:42.634839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.168 [2024-11-04 16:16:42.634907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:24.168 [2024-11-04 16:16:42.634947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:24.168 [2024-11-04 16:16:42.634960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.168 [2024-11-04 16:16:42.635013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.168 [2024-11-04 16:16:42.635027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:24.168 [2024-11-04 16:16:42.635046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:24.168 [2024-11-04 16:16:42.635057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.168 [2024-11-04 16:16:42.635082] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:24.168 [2024-11-04 16:16:42.636016] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:24.168 [2024-11-04 16:16:42.636049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.168 [2024-11-04 16:16:42.636063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:24.168 [2024-11-04 16:16:42.636075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 
00:22:24.168 [2024-11-04 16:16:42.636087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.168 [2024-11-04 16:16:42.637574] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:24.168 [2024-11-04 16:16:42.655969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.168 [2024-11-04 16:16:42.656016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:24.168 [2024-11-04 16:16:42.656032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.425 ms 00:22:24.168 [2024-11-04 16:16:42.656044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.168 [2024-11-04 16:16:42.656131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.168 [2024-11-04 16:16:42.656145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:24.168 [2024-11-04 16:16:42.656158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:24.168 [2024-11-04 16:16:42.656169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.168 [2024-11-04 16:16:42.663201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.168 [2024-11-04 16:16:42.663232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:24.168 [2024-11-04 16:16:42.663245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.963 ms 00:22:24.168 [2024-11-04 16:16:42.663257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.168 [2024-11-04 16:16:42.663360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.168 [2024-11-04 16:16:42.663375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:24.168 [2024-11-04 16:16:42.663387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:24.168 [2024-11-04 16:16:42.663399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.168 [2024-11-04 16:16:42.663444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.168 [2024-11-04 16:16:42.663457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:24.168 [2024-11-04 16:16:42.663469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:24.168 [2024-11-04 16:16:42.663480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.168 [2024-11-04 16:16:42.663508] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:24.168 [2024-11-04 16:16:42.668209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.168 [2024-11-04 16:16:42.668247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:24.168 [2024-11-04 16:16:42.668261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.715 ms 00:22:24.168 [2024-11-04 16:16:42.668277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.168 [2024-11-04 16:16:42.668310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.168 [2024-11-04 16:16:42.668324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:24.168 [2024-11-04 16:16:42.668336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:24.168 [2024-11-04 16:16:42.668348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.168 
[2024-11-04 16:16:42.668404] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:24.168 [2024-11-04 16:16:42.668428] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:24.168 [2024-11-04 16:16:42.668465] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:24.168 [2024-11-04 16:16:42.668487] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:24.168 [2024-11-04 16:16:42.668595] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:24.168 [2024-11-04 16:16:42.668611] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:24.168 [2024-11-04 16:16:42.668626] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:24.168 [2024-11-04 16:16:42.668641] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:24.168 [2024-11-04 16:16:42.668655] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:24.168 [2024-11-04 16:16:42.668668] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:24.168 [2024-11-04 16:16:42.668680] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:24.168 [2024-11-04 16:16:42.668691] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:24.168 [2024-11-04 16:16:42.668703] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:24.168 [2024-11-04 16:16:42.668719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.168 [2024-11-04 16:16:42.668730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:24.168 [2024-11-04 16:16:42.668742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:22:24.168 [2024-11-04 16:16:42.668754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.168 [2024-11-04 16:16:42.668845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.168 [2024-11-04 16:16:42.668859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:24.168 [2024-11-04 16:16:42.668871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:24.168 [2024-11-04 16:16:42.668882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.168 [2024-11-04 16:16:42.668982] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:24.168 [2024-11-04 16:16:42.669004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:24.168 [2024-11-04 16:16:42.669016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:24.168 [2024-11-04 16:16:42.669028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.168 [2024-11-04 16:16:42.669040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:24.168 [2024-11-04 16:16:42.669051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:24.168 [2024-11-04 16:16:42.669062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:24.168 [2024-11-04 16:16:42.669074] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region band_md 00:22:24.168 [2024-11-04 16:16:42.669085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:24.168 [2024-11-04 16:16:42.669096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:24.168 [2024-11-04 16:16:42.669107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:24.168 [2024-11-04 16:16:42.669118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:24.168 [2024-11-04 16:16:42.669129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:24.168 [2024-11-04 16:16:42.669141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:24.168 [2024-11-04 16:16:42.669152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:24.168 [2024-11-04 16:16:42.669173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.168 [2024-11-04 16:16:42.669184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:24.168 [2024-11-04 16:16:42.669196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:24.168 [2024-11-04 16:16:42.669207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.168 [2024-11-04 16:16:42.669219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:24.168 [2024-11-04 16:16:42.669230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:24.168 [2024-11-04 16:16:42.669241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.168 [2024-11-04 16:16:42.669252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:24.168 [2024-11-04 16:16:42.669263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:24.168 [2024-11-04 16:16:42.669274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.168 [2024-11-04 16:16:42.669285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:24.168 [2024-11-04 16:16:42.669295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:24.168 [2024-11-04 16:16:42.669306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.168 [2024-11-04 16:16:42.669317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:24.168 [2024-11-04 16:16:42.669328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:24.168 [2024-11-04 16:16:42.669338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.168 [2024-11-04 16:16:42.669349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:24.168 [2024-11-04 16:16:42.669360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:24.168 [2024-11-04 16:16:42.669370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:24.168 [2024-11-04 16:16:42.669381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:24.168 [2024-11-04 16:16:42.669392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:24.169 [2024-11-04 16:16:42.669403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:24.169 [2024-11-04 16:16:42.669414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:24.169 [2024-11-04 16:16:42.669424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:24.169 [2024-11-04 
16:16:42.669435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.169 [2024-11-04 16:16:42.669445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:24.169 [2024-11-04 16:16:42.669456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:24.169 [2024-11-04 16:16:42.669466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.169 [2024-11-04 16:16:42.669479] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:24.169 [2024-11-04 16:16:42.669491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:24.169 [2024-11-04 16:16:42.669502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:24.169 [2024-11-04 16:16:42.669513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.169 [2024-11-04 16:16:42.669525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:24.169 [2024-11-04 16:16:42.669536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:24.169 [2024-11-04 16:16:42.669547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:24.169 [2024-11-04 16:16:42.669558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:24.169 [2024-11-04 16:16:42.669569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:24.169 [2024-11-04 16:16:42.669580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:24.169 [2024-11-04 16:16:42.669593] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:24.169 [2024-11-04 16:16:42.669607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:24.169 [2024-11-04 16:16:42.669620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:24.169 [2024-11-04 16:16:42.669632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:24.169 [2024-11-04 16:16:42.669644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:24.169 [2024-11-04 16:16:42.669655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:24.169 [2024-11-04 16:16:42.669668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:24.169 [2024-11-04 16:16:42.669679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:24.169 [2024-11-04 16:16:42.669691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:24.169 [2024-11-04 16:16:42.669703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:24.169 [2024-11-04 16:16:42.669714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:24.169 [2024-11-04 16:16:42.669726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:24.169 [2024-11-04 16:16:42.669738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:24.169 [2024-11-04 16:16:42.669764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:24.169 [2024-11-04 16:16:42.669777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:24.169 [2024-11-04 16:16:42.669789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:24.169 [2024-11-04 16:16:42.669800] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:24.169 [2024-11-04 16:16:42.669818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:24.169 [2024-11-04 16:16:42.669832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:24.169 [2024-11-04 16:16:42.669844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:24.169 [2024-11-04 16:16:42.669856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:24.169 [2024-11-04 16:16:42.669868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:24.169 [2024-11-04 16:16:42.669882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.169 [2024-11-04 16:16:42.669894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:24.169 [2024-11-04 16:16:42.669906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:22:24.169 [2024-11-04 16:16:42.669917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.169 [2024-11-04 16:16:42.706283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.169 [2024-11-04 16:16:42.706325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:24.169 [2024-11-04 16:16:42.706340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.374 ms 00:22:24.169 [2024-11-04 16:16:42.706352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.169 [2024-11-04 16:16:42.706451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.169 [2024-11-04 16:16:42.706465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:24.169 [2024-11-04 16:16:42.706477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:24.169 [2024-11-04 16:16:42.706489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.169 [2024-11-04 16:16:42.770431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.169 [2024-11-04 16:16:42.770474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:24.169 [2024-11-04 16:16:42.770489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.976 ms 00:22:24.169 [2024-11-04 16:16:42.770501] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.169 [2024-11-04 16:16:42.770567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.169 [2024-11-04 16:16:42.770580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:24.169 [2024-11-04 16:16:42.770594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:24.169 [2024-11-04 16:16:42.770611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.169 [2024-11-04 16:16:42.771142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.169 [2024-11-04 16:16:42.771168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:24.169 [2024-11-04 16:16:42.771182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:22:24.169 [2024-11-04 16:16:42.771193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.169 [2024-11-04 16:16:42.771318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.169 [2024-11-04 16:16:42.771334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:24.169 [2024-11-04 16:16:42.771346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:22:24.169 [2024-11-04 16:16:42.771364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.169 [2024-11-04 16:16:42.789416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.169 [2024-11-04 16:16:42.789456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:24.169 [2024-11-04 16:16:42.789475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.055 ms 00:22:24.169 [2024-11-04 16:16:42.789502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.169 [2024-11-04 16:16:42.808002] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:24.169 [2024-11-04 16:16:42.808060] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:24.169 [2024-11-04 16:16:42.808077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.169 [2024-11-04 16:16:42.808088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:24.169 [2024-11-04 16:16:42.808118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.499 ms 00:22:24.169 [2024-11-04 16:16:42.808129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.169 [2024-11-04 16:16:42.836506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.169 [2024-11-04 16:16:42.836562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:24.169 [2024-11-04 16:16:42.836578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.375 ms 00:22:24.169 [2024-11-04 16:16:42.836591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.169 [2024-11-04 16:16:42.854932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.169 [2024-11-04 16:16:42.854976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:24.169 [2024-11-04 16:16:42.854991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.322 ms 00:22:24.169 [2024-11-04 16:16:42.855003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:24.169 [2024-11-04 16:16:42.872831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.169 [2024-11-04 16:16:42.872875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:24.169 [2024-11-04 16:16:42.872891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.813 ms 00:22:24.169 [2024-11-04 16:16:42.872902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.169 [2024-11-04 16:16:42.873652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.169 [2024-11-04 16:16:42.873690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:24.169 [2024-11-04 16:16:42.873705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:22:24.169 [2024-11-04 16:16:42.873722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.435 [2024-11-04 16:16:42.954366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.435 [2024-11-04 16:16:42.954455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:24.435 [2024-11-04 16:16:42.954498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.749 ms 00:22:24.435 [2024-11-04 16:16:42.954511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.435 [2024-11-04 16:16:42.964751] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:24.435 [2024-11-04 16:16:42.967228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.435 [2024-11-04 16:16:42.967261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:24.435 [2024-11-04 16:16:42.967293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.680 ms 00:22:24.435 [2024-11-04 16:16:42.967305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.435 [2024-11-04 16:16:42.967391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.435 [2024-11-04 16:16:42.967405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:24.435 [2024-11-04 16:16:42.967418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:24.435 [2024-11-04 16:16:42.967434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.435 [2024-11-04 16:16:42.967512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.435 [2024-11-04 16:16:42.967526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:24.435 [2024-11-04 16:16:42.967538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:24.435 [2024-11-04 16:16:42.967550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.435 [2024-11-04 16:16:42.967591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.435 [2024-11-04 16:16:42.967604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:24.435 [2024-11-04 16:16:42.967616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:24.435 [2024-11-04 16:16:42.967628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.435 [2024-11-04 16:16:42.967678] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:24.435 [2024-11-04 16:16:42.967697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.435 
[2024-11-04 16:16:42.967709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:24.435 [2024-11-04 16:16:42.967721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:24.435 [2024-11-04 16:16:42.967732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.435 [2024-11-04 16:16:43.003186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.435 [2024-11-04 16:16:43.003229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:24.435 [2024-11-04 16:16:43.003244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.484 ms 00:22:24.435 [2024-11-04 16:16:43.003279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.435 [2024-11-04 16:16:43.003358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.435 [2024-11-04 16:16:43.003372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:24.435 [2024-11-04 16:16:43.003385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:24.435 [2024-11-04 16:16:43.003396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.435 [2024-11-04 16:16:43.004577] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 369.892 ms, result 0 00:22:25.822  [2024-11-04T16:16:45.481Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-04T16:16:46.417Z] Copying: 47/1024 [MB] (23 MBps) [2024-11-04T16:16:47.353Z] Copying: 71/1024 [MB] (23 MBps) [2024-11-04T16:16:48.290Z] Copying: 95/1024 [MB] (23 MBps) [2024-11-04T16:16:49.225Z] Copying: 118/1024 [MB] (23 MBps) [2024-11-04T16:16:50.602Z] Copying: 142/1024 [MB] (23 MBps) [2024-11-04T16:16:51.539Z] Copying: 166/1024 [MB] (24 MBps) [2024-11-04T16:16:52.472Z] Copying: 190/1024 [MB] (23 MBps) [2024-11-04T16:16:53.407Z] Copying: 214/1024 [MB] (24 MBps) [2024-11-04T16:16:54.343Z] Copying: 238/1024 [MB] (23 MBps) [2024-11-04T16:16:55.314Z] Copying: 261/1024 [MB] (22 MBps) [2024-11-04T16:16:56.251Z] Copying: 284/1024 [MB] (23 MBps) [2024-11-04T16:16:57.629Z] Copying: 307/1024 [MB] (23 MBps) [2024-11-04T16:16:58.197Z] Copying: 332/1024 [MB] (24 MBps) [2024-11-04T16:16:59.575Z] Copying: 356/1024 [MB] (24 MBps) [2024-11-04T16:17:00.515Z] Copying: 381/1024 [MB] (24 MBps) [2024-11-04T16:17:01.452Z] Copying: 405/1024 [MB] (23 MBps) [2024-11-04T16:17:02.388Z] Copying: 428/1024 [MB] (23 MBps) [2024-11-04T16:17:03.325Z] Copying: 452/1024 [MB] (23 MBps) [2024-11-04T16:17:04.262Z] Copying: 476/1024 [MB] (24 MBps) [2024-11-04T16:17:05.198Z] Copying: 501/1024 [MB] (24 MBps) [2024-11-04T16:17:06.581Z] Copying: 526/1024 [MB] (24 MBps) [2024-11-04T16:17:07.518Z] Copying: 550/1024 [MB] (24 MBps) [2024-11-04T16:17:08.455Z] Copying: 575/1024 [MB] (24 MBps) [2024-11-04T16:17:09.394Z] Copying: 599/1024 [MB] (24 MBps) [2024-11-04T16:17:10.333Z] Copying: 624/1024 [MB] (24 MBps) [2024-11-04T16:17:11.270Z] Copying: 650/1024 [MB] (26 MBps) [2024-11-04T16:17:12.222Z] Copying: 676/1024 [MB] (26 MBps) [2024-11-04T16:17:13.599Z] Copying: 703/1024 [MB] (26 MBps) [2024-11-04T16:17:14.167Z] Copying: 729/1024 [MB] (25 MBps) [2024-11-04T16:17:15.545Z] Copying: 755/1024 [MB] (26 MBps) [2024-11-04T16:17:16.481Z] Copying: 781/1024 [MB] (26 MBps) [2024-11-04T16:17:17.424Z] Copying: 808/1024 [MB] (26 MBps) [2024-11-04T16:17:18.360Z] Copying: 833/1024 [MB] (25 MBps) [2024-11-04T16:17:19.298Z] Copying: 859/1024 [MB] (26 MBps) [2024-11-04T16:17:20.238Z] 
Copying: 885/1024 [MB] (26 MBps) [2024-11-04T16:17:21.173Z] Copying: 912/1024 [MB] (26 MBps) [2024-11-04T16:17:22.551Z] Copying: 938/1024 [MB] (26 MBps) [2024-11-04T16:17:23.487Z] Copying: 965/1024 [MB] (26 MBps) [2024-11-04T16:17:24.423Z] Copying: 990/1024 [MB] (25 MBps) [2024-11-04T16:17:24.682Z] Copying: 1016/1024 [MB] (25 MBps) [2024-11-04T16:17:24.941Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-04 16:17:24.893575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.220 [2024-11-04 16:17:24.893670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:06.220 [2024-11-04 16:17:24.893700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:06.220 [2024-11-04 16:17:24.893723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.220 [2024-11-04 16:17:24.893782] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:06.220 [2024-11-04 16:17:24.899307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.220 [2024-11-04 16:17:24.899368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:06.220 [2024-11-04 16:17:24.899404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.498 ms 00:23:06.220 [2024-11-04 16:17:24.899425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.220 [2024-11-04 16:17:24.899795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.220 [2024-11-04 16:17:24.899836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:06.220 [2024-11-04 16:17:24.899860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:23:06.220 [2024-11-04 16:17:24.899882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.220 [2024-11-04 16:17:24.903579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.220 [2024-11-04 16:17:24.903616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:06.220 [2024-11-04 16:17:24.903632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.673 ms 00:23:06.220 [2024-11-04 16:17:24.903647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.220 [2024-11-04 16:17:24.909384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.220 [2024-11-04 16:17:24.909424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:06.220 [2024-11-04 16:17:24.909437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.711 ms 00:23:06.220 [2024-11-04 16:17:24.909447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.480 [2024-11-04 16:17:24.943790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.480 [2024-11-04 16:17:24.943830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:06.480 [2024-11-04 16:17:24.943860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.324 ms 00:23:06.480 [2024-11-04 16:17:24.943870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.480 [2024-11-04 16:17:24.963415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.480 [2024-11-04 16:17:24.963455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:06.480 [2024-11-04 16:17:24.963468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 19.538 ms 00:23:06.480 [2024-11-04 16:17:24.963478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.480 [2024-11-04 16:17:24.963615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.480 [2024-11-04 16:17:24.963635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:06.480 [2024-11-04 16:17:24.963645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:23:06.480 [2024-11-04 16:17:24.963655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.480 [2024-11-04 16:17:24.998456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.480 [2024-11-04 16:17:24.998489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:06.480 [2024-11-04 16:17:24.998502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.841 ms 00:23:06.480 [2024-11-04 16:17:24.998510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.480 [2024-11-04 16:17:25.032666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.480 [2024-11-04 16:17:25.032713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:06.480 [2024-11-04 16:17:25.032725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.151 ms 00:23:06.480 [2024-11-04 16:17:25.032734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.480 [2024-11-04 16:17:25.065774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.480 [2024-11-04 16:17:25.065807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:06.480 [2024-11-04 16:17:25.065819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.034 ms 00:23:06.480 [2024-11-04 16:17:25.065828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.480 [2024-11-04 16:17:25.099506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.480 [2024-11-04 16:17:25.099544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:06.480 [2024-11-04 16:17:25.099556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.645 ms 00:23:06.480 [2024-11-04 16:17:25.099565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.480 [2024-11-04 16:17:25.099616] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:06.480 [2024-11-04 16:17:25.099633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:06.480 [2024-11-04 16:17:25.099657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:06.480 [2024-11-04 16:17:25.099668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099720] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.099986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 
[2024-11-04 16:17:25.099997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:23:06.481 [2024-11-04 16:17:25.100254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:06.481 [2024-11-04 16:17:25.100623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:06.482 [2024-11-04 16:17:25.100633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:06.482 [2024-11-04 16:17:25.100651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:06.482 [2024-11-04 16:17:25.100661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:06.482 [2024-11-04 16:17:25.100671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:06.482 [2024-11-04 16:17:25.100682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:06.482 [2024-11-04 16:17:25.100692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:06.482 [2024-11-04 16:17:25.100703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:06.482 [2024-11-04 16:17:25.100721] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:06.482 [2024-11-04 16:17:25.100735] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f92d22d1-3db6-4ffd-ae00-b4e7f5d476c5 00:23:06.482 [2024-11-04 16:17:25.100753] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:06.482 [2024-11-04 16:17:25.100763] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:06.482 [2024-11-04 16:17:25.100773] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:06.482 [2024-11-04 16:17:25.100783] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:06.482 [2024-11-04 16:17:25.100792] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:06.482 [2024-11-04 16:17:25.100803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:23:06.482 [2024-11-04 16:17:25.100822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:06.482 [2024-11-04 16:17:25.100831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:06.482 [2024-11-04 16:17:25.100840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:06.482 [2024-11-04 16:17:25.100850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.482 [2024-11-04 16:17:25.100859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:06.482 [2024-11-04 16:17:25.100870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:23:06.482 [2024-11-04 16:17:25.100879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.482 [2024-11-04 16:17:25.119984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.482 [2024-11-04 16:17:25.120018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:06.482 [2024-11-04 16:17:25.120029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.085 ms 00:23:06.482 [2024-11-04 16:17:25.120044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.482 [2024-11-04 16:17:25.120548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.482 [2024-11-04 16:17:25.120567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:06.482 [2024-11-04 16:17:25.120578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:23:06.482 [2024-11-04 16:17:25.120594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.482 [2024-11-04 16:17:25.170704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.482 [2024-11-04 16:17:25.170745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:06.482 [2024-11-04 16:17:25.170766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.482 [2024-11-04 16:17:25.170777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.482 [2024-11-04 16:17:25.170832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.482 [2024-11-04 16:17:25.170842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:06.482 [2024-11-04 16:17:25.170853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.482 [2024-11-04 16:17:25.170868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.482 [2024-11-04 16:17:25.170938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.482 [2024-11-04 16:17:25.170952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:06.482 [2024-11-04 16:17:25.170963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.482 [2024-11-04 16:17:25.170973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.482 [2024-11-04 16:17:25.170989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.482 [2024-11-04 16:17:25.171000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:06.482 [2024-11-04 16:17:25.171010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.482 [2024-11-04 16:17:25.171020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.741 [2024-11-04 16:17:25.285142] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.741 [2024-11-04 16:17:25.285192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:06.741 [2024-11-04 16:17:25.285205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.741 [2024-11-04 16:17:25.285215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.741 [2024-11-04 16:17:25.379089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.741 [2024-11-04 16:17:25.379139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:06.741 [2024-11-04 16:17:25.379152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.741 [2024-11-04 16:17:25.379162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.741 [2024-11-04 16:17:25.379247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.741 [2024-11-04 16:17:25.379259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:06.741 [2024-11-04 16:17:25.379270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.741 [2024-11-04 16:17:25.379279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.741 [2024-11-04 16:17:25.379314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.741 [2024-11-04 16:17:25.379325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:06.741 [2024-11-04 16:17:25.379334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.741 [2024-11-04 16:17:25.379344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.741 [2024-11-04 16:17:25.379444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.741 [2024-11-04 16:17:25.379457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:06.741 [2024-11-04 16:17:25.379467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.741 [2024-11-04 16:17:25.379493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.741 [2024-11-04 16:17:25.379527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.741 [2024-11-04 16:17:25.379538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:06.741 [2024-11-04 16:17:25.379549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.741 [2024-11-04 16:17:25.379558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.741 [2024-11-04 16:17:25.379595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.741 [2024-11-04 16:17:25.379611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:06.741 [2024-11-04 16:17:25.379621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.741 [2024-11-04 16:17:25.379631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.741 [2024-11-04 16:17:25.379671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.741 [2024-11-04 16:17:25.379683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:06.741 [2024-11-04 16:17:25.379693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.741 [2024-11-04 16:17:25.379719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:06.741 [2024-11-04 16:17:25.379858] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 487.055 ms, result 0 00:23:07.679 00:23:07.679 00:23:07.679 16:17:26 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:09.582 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:09.582 16:17:28 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:09.582 [2024-11-04 16:17:28.161967] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:23:09.582 [2024-11-04 16:17:28.162079] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77453 ] 00:23:09.861 [2024-11-04 16:17:28.340169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.861 [2024-11-04 16:17:28.446110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.137 [2024-11-04 16:17:28.787977] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:10.137 [2024-11-04 16:17:28.788044] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:10.397 [2024-11-04 16:17:28.947332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.397 [2024-11-04 16:17:28.947380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:10.397 [2024-11-04 16:17:28.947401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:10.397 [2024-11-04 16:17:28.947412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.397 [2024-11-04 16:17:28.947459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.397 [2024-11-04 16:17:28.947472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:10.397 [2024-11-04 16:17:28.947486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:10.397 [2024-11-04 16:17:28.947495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.397 [2024-11-04 16:17:28.947517] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:10.397 [2024-11-04 16:17:28.948424] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:10.397 [2024-11-04 16:17:28.948474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.397 [2024-11-04 16:17:28.948486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:10.397 [2024-11-04 16:17:28.948497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:23:10.397 [2024-11-04 16:17:28.948506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.397 [2024-11-04 16:17:28.950028] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:10.397 [2024-11-04 16:17:28.968901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.397 [2024-11-04 16:17:28.968939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:10.397 
[2024-11-04 16:17:28.968954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.904 ms 00:23:10.397 [2024-11-04 16:17:28.968964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.397 [2024-11-04 16:17:28.969028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.397 [2024-11-04 16:17:28.969041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:10.397 [2024-11-04 16:17:28.969052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:10.397 [2024-11-04 16:17:28.969062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.397 [2024-11-04 16:17:28.975975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.397 [2024-11-04 16:17:28.976172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:10.397 [2024-11-04 16:17:28.976193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.853 ms 00:23:10.397 [2024-11-04 16:17:28.976204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.397 [2024-11-04 16:17:28.976291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.397 [2024-11-04 16:17:28.976305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:10.397 [2024-11-04 16:17:28.976317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:10.397 [2024-11-04 16:17:28.976328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.397 [2024-11-04 16:17:28.976370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.397 [2024-11-04 16:17:28.976383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:10.397 [2024-11-04 16:17:28.976394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:10.397 [2024-11-04 16:17:28.976404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.397 [2024-11-04 16:17:28.976428] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:10.397 [2024-11-04 16:17:28.981238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.397 [2024-11-04 16:17:28.981267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:10.397 [2024-11-04 16:17:28.981279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.823 ms 00:23:10.397 [2024-11-04 16:17:28.981293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.397 [2024-11-04 16:17:28.981323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.397 [2024-11-04 16:17:28.981334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:10.397 [2024-11-04 16:17:28.981344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:10.397 [2024-11-04 16:17:28.981354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.397 [2024-11-04 16:17:28.981424] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:10.397 [2024-11-04 16:17:28.981497] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:10.397 [2024-11-04 16:17:28.981533] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:10.397 [2024-11-04 16:17:28.981554] 
upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:10.397 [2024-11-04 16:17:28.981642] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:10.397 [2024-11-04 16:17:28.981658] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:10.397 [2024-11-04 16:17:28.981671] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:10.397 [2024-11-04 16:17:28.981685] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:10.397 [2024-11-04 16:17:28.981697] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:10.397 [2024-11-04 16:17:28.981708] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:10.397 [2024-11-04 16:17:28.981719] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:10.397 [2024-11-04 16:17:28.981729] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:10.397 [2024-11-04 16:17:28.981739] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:10.397 [2024-11-04 16:17:28.981782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.397 [2024-11-04 16:17:28.981793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:10.397 [2024-11-04 16:17:28.981804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:23:10.397 [2024-11-04 16:17:28.981814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.397 [2024-11-04 16:17:28.981890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.397 [2024-11-04 16:17:28.981903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:10.397 [2024-11-04 16:17:28.981914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:10.397 [2024-11-04 16:17:28.981923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.397 [2024-11-04 16:17:28.982016] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:10.397 [2024-11-04 16:17:28.982035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:10.397 [2024-11-04 16:17:28.982047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:10.397 [2024-11-04 16:17:28.982058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.397 [2024-11-04 16:17:28.982068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:10.397 [2024-11-04 16:17:28.982078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:10.397 [2024-11-04 16:17:28.982087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:10.397 [2024-11-04 16:17:28.982097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:10.397 [2024-11-04 16:17:28.982107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:10.397 [2024-11-04 16:17:28.982116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:10.397 [2024-11-04 16:17:28.982128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:10.397 [2024-11-04 16:17:28.982138] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 80.62 MiB 00:23:10.397 [2024-11-04 16:17:28.982147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:10.397 [2024-11-04 16:17:28.982156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:10.397 [2024-11-04 16:17:28.982165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:10.397 [2024-11-04 16:17:28.982183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.397 [2024-11-04 16:17:28.982192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:10.397 [2024-11-04 16:17:28.982202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:10.398 [2024-11-04 16:17:28.982211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.398 [2024-11-04 16:17:28.982221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:10.398 [2024-11-04 16:17:28.982230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:10.398 [2024-11-04 16:17:28.982239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.398 [2024-11-04 16:17:28.982248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:10.398 [2024-11-04 16:17:28.982258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:10.398 [2024-11-04 16:17:28.982267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.398 [2024-11-04 16:17:28.982276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:10.398 [2024-11-04 16:17:28.982285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:10.398 [2024-11-04 16:17:28.982294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.398 [2024-11-04 16:17:28.982302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:10.398 [2024-11-04 16:17:28.982311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:10.398 [2024-11-04 16:17:28.982320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:10.398 [2024-11-04 16:17:28.982329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:10.398 [2024-11-04 16:17:28.982338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:10.398 [2024-11-04 16:17:28.982348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:10.398 [2024-11-04 16:17:28.982357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:10.398 [2024-11-04 16:17:28.982365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:10.398 [2024-11-04 16:17:28.982374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:10.398 [2024-11-04 16:17:28.982383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:10.398 [2024-11-04 16:17:28.982392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:10.398 [2024-11-04 16:17:28.982400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.398 [2024-11-04 16:17:28.982410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:10.398 [2024-11-04 16:17:28.982418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:10.398 [2024-11-04 16:17:28.982428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.398 [2024-11-04 16:17:28.982437] 
ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:10.398 [2024-11-04 16:17:28.982447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:10.398 [2024-11-04 16:17:28.982456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:10.398 [2024-11-04 16:17:28.982466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:10.398 [2024-11-04 16:17:28.982476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:10.398 [2024-11-04 16:17:28.982486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:10.398 [2024-11-04 16:17:28.982495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:10.398 [2024-11-04 16:17:28.982504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:10.398 [2024-11-04 16:17:28.982513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:10.398 [2024-11-04 16:17:28.982522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:10.398 [2024-11-04 16:17:28.982533] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:10.398 [2024-11-04 16:17:28.982554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:10.398 [2024-11-04 16:17:28.982566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:10.398 [2024-11-04 16:17:28.982577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:10.398 [2024-11-04 16:17:28.982587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:10.398 [2024-11-04 16:17:28.982597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:10.398 [2024-11-04 16:17:28.982608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:10.398 [2024-11-04 16:17:28.982618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:10.398 [2024-11-04 16:17:28.982629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:10.398 [2024-11-04 16:17:28.982639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:10.398 [2024-11-04 16:17:28.982649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:10.398 [2024-11-04 16:17:28.982659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:10.398 [2024-11-04 16:17:28.982669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:10.398 [2024-11-04 16:17:28.982679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:10.398 [2024-11-04 16:17:28.982691] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:10.398 [2024-11-04 16:17:28.982702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:10.398 [2024-11-04 16:17:28.982712] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:10.398 [2024-11-04 16:17:28.982727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:10.398 [2024-11-04 16:17:28.982738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:10.398 [2024-11-04 16:17:28.982759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:10.398 [2024-11-04 16:17:28.982771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:10.398 [2024-11-04 16:17:28.982782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:10.398 [2024-11-04 16:17:28.982792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.398 [2024-11-04 16:17:28.982803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:10.398 [2024-11-04 16:17:28.982814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:23:10.398 [2024-11-04 16:17:28.982824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.398 [2024-11-04 16:17:29.020592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.398 [2024-11-04 16:17:29.020629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:10.398 [2024-11-04 16:17:29.020643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.781 ms 00:23:10.398 [2024-11-04 16:17:29.020654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.398 [2024-11-04 16:17:29.020734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.398 [2024-11-04 16:17:29.020745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:10.398 [2024-11-04 16:17:29.020774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:10.398 [2024-11-04 16:17:29.020784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.398 [2024-11-04 16:17:29.074968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.398 [2024-11-04 16:17:29.075004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:10.398 [2024-11-04 16:17:29.075018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.215 ms 00:23:10.398 [2024-11-04 16:17:29.075029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.398 [2024-11-04 16:17:29.075065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.398 [2024-11-04 16:17:29.075076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:10.398 [2024-11-04 16:17:29.075087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:10.398 [2024-11-04 16:17:29.075101] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.398 [2024-11-04 16:17:29.075580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.398 [2024-11-04 16:17:29.075594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:10.398 [2024-11-04 16:17:29.075605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:23:10.398 [2024-11-04 16:17:29.075615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.398 [2024-11-04 16:17:29.075729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.398 [2024-11-04 16:17:29.075743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:10.398 [2024-11-04 16:17:29.075774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:23:10.398 [2024-11-04 16:17:29.075791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.398 [2024-11-04 16:17:29.094139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.398 [2024-11-04 16:17:29.094172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:10.398 [2024-11-04 16:17:29.094189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.356 ms 00:23:10.398 [2024-11-04 16:17:29.094199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.398 [2024-11-04 16:17:29.111824] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:10.398 [2024-11-04 16:17:29.111861] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:10.398 [2024-11-04 16:17:29.111875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.398 [2024-11-04 16:17:29.111886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:10.398 [2024-11-04 16:17:29.111897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.611 ms 00:23:10.398 [2024-11-04 16:17:29.111906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.657 [2024-11-04 16:17:29.139379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.657 [2024-11-04 16:17:29.139425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:10.657 [2024-11-04 16:17:29.139438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.474 ms 00:23:10.657 [2024-11-04 16:17:29.139449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.657 [2024-11-04 16:17:29.156507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.657 [2024-11-04 16:17:29.156545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:10.657 [2024-11-04 16:17:29.156557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.044 ms 00:23:10.657 [2024-11-04 16:17:29.156567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.657 [2024-11-04 16:17:29.173080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.657 [2024-11-04 16:17:29.173248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:10.657 [2024-11-04 16:17:29.173270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.503 ms 00:23:10.657 [2024-11-04 16:17:29.173280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.657 
[2024-11-04 16:17:29.174064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.657 [2024-11-04 16:17:29.174087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:10.657 [2024-11-04 16:17:29.174099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:23:10.657 [2024-11-04 16:17:29.174113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.657 [2024-11-04 16:17:29.254334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.657 [2024-11-04 16:17:29.254391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:10.658 [2024-11-04 16:17:29.254414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.329 ms 00:23:10.658 [2024-11-04 16:17:29.254424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.658 [2024-11-04 16:17:29.264478] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:10.658 [2024-11-04 16:17:29.266724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.658 [2024-11-04 16:17:29.266764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:10.658 [2024-11-04 16:17:29.266778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.275 ms 00:23:10.658 [2024-11-04 16:17:29.266789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.658 [2024-11-04 16:17:29.266864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.658 [2024-11-04 16:17:29.266878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:10.658 [2024-11-04 16:17:29.266891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:10.658 [2024-11-04 16:17:29.266905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.658 [2024-11-04 16:17:29.266978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.658 [2024-11-04 16:17:29.266991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:10.658 [2024-11-04 16:17:29.267001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:10.658 [2024-11-04 16:17:29.267011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.658 [2024-11-04 16:17:29.267031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.658 [2024-11-04 16:17:29.267043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:10.658 [2024-11-04 16:17:29.267053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:10.658 [2024-11-04 16:17:29.267064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.658 [2024-11-04 16:17:29.267097] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:10.658 [2024-11-04 16:17:29.267111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.658 [2024-11-04 16:17:29.267122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:10.658 [2024-11-04 16:17:29.267133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:10.658 [2024-11-04 16:17:29.267142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.658 [2024-11-04 16:17:29.303937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.658 [2024-11-04 
16:17:29.303978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:10.658 [2024-11-04 16:17:29.303994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.835 ms 00:23:10.658 [2024-11-04 16:17:29.304010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.658 [2024-11-04 16:17:29.304086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.658 [2024-11-04 16:17:29.304101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:10.658 [2024-11-04 16:17:29.304112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:10.658 [2024-11-04 16:17:29.304123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.658 [2024-11-04 16:17:29.305216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 358.011 ms, result 0 00:23:11.595  [2024-11-04T16:17:31.694Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-04T16:17:32.631Z] Copying: 47/1024 [MB] (25 MBps) [2024-11-04T16:17:33.568Z] Copying: 71/1024 [MB] (23 MBps) [2024-11-04T16:17:34.504Z] Copying: 93/1024 [MB] (22 MBps) [2024-11-04T16:17:35.442Z] Copying: 115/1024 [MB] (21 MBps) [2024-11-04T16:17:36.379Z] Copying: 138/1024 [MB] (23 MBps) [2024-11-04T16:17:37.317Z] Copying: 162/1024 [MB] (23 MBps) [2024-11-04T16:17:38.696Z] Copying: 187/1024 [MB] (25 MBps) [2024-11-04T16:17:39.666Z] Copying: 211/1024 [MB] (24 MBps) [2024-11-04T16:17:40.606Z] Copying: 233/1024 [MB] (21 MBps) [2024-11-04T16:17:41.545Z] Copying: 254/1024 [MB] (20 MBps) [2024-11-04T16:17:42.481Z] Copying: 276/1024 [MB] (22 MBps) [2024-11-04T16:17:43.418Z] Copying: 300/1024 [MB] (23 MBps) [2024-11-04T16:17:44.354Z] Copying: 324/1024 [MB] (24 MBps) [2024-11-04T16:17:45.732Z] Copying: 347/1024 [MB] (22 MBps) [2024-11-04T16:17:46.300Z] Copying: 370/1024 [MB] (23 MBps) [2024-11-04T16:17:47.677Z] Copying: 395/1024 [MB] (24 MBps) [2024-11-04T16:17:48.613Z] Copying: 420/1024 [MB] (24 MBps) [2024-11-04T16:17:49.550Z] Copying: 445/1024 [MB] (24 MBps) [2024-11-04T16:17:50.490Z] Copying: 469/1024 [MB] (24 MBps) [2024-11-04T16:17:51.433Z] Copying: 494/1024 [MB] (24 MBps) [2024-11-04T16:17:52.368Z] Copying: 519/1024 [MB] (25 MBps) [2024-11-04T16:17:53.306Z] Copying: 545/1024 [MB] (25 MBps) [2024-11-04T16:17:54.684Z] Copying: 569/1024 [MB] (24 MBps) [2024-11-04T16:17:55.622Z] Copying: 595/1024 [MB] (25 MBps) [2024-11-04T16:17:56.560Z] Copying: 621/1024 [MB] (25 MBps) [2024-11-04T16:17:57.496Z] Copying: 647/1024 [MB] (26 MBps) [2024-11-04T16:17:58.434Z] Copying: 673/1024 [MB] (25 MBps) [2024-11-04T16:17:59.371Z] Copying: 697/1024 [MB] (24 MBps) [2024-11-04T16:18:00.308Z] Copying: 723/1024 [MB] (25 MBps) [2024-11-04T16:18:01.686Z] Copying: 748/1024 [MB] (25 MBps) [2024-11-04T16:18:02.625Z] Copying: 775/1024 [MB] (26 MBps) [2024-11-04T16:18:03.591Z] Copying: 801/1024 [MB] (26 MBps) [2024-11-04T16:18:04.528Z] Copying: 827/1024 [MB] (26 MBps) [2024-11-04T16:18:05.465Z] Copying: 853/1024 [MB] (25 MBps) [2024-11-04T16:18:06.403Z] Copying: 878/1024 [MB] (25 MBps) [2024-11-04T16:18:07.339Z] Copying: 903/1024 [MB] (24 MBps) [2024-11-04T16:18:08.276Z] Copying: 929/1024 [MB] (25 MBps) [2024-11-04T16:18:09.654Z] Copying: 955/1024 [MB] (25 MBps) [2024-11-04T16:18:10.591Z] Copying: 980/1024 [MB] (25 MBps) [2024-11-04T16:18:11.528Z] Copying: 1005/1024 [MB] (25 MBps) [2024-11-04T16:18:11.788Z] Copying: 1023/1024 [MB] (17 MBps) [2024-11-04T16:18:11.788Z] Copying: 1024/1024 [MB] (average 24 
MBps)[2024-11-04 16:18:11.637179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.066 [2024-11-04 16:18:11.637243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:53.066 [2024-11-04 16:18:11.637260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:53.066 [2024-11-04 16:18:11.637279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.066 [2024-11-04 16:18:11.639788] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:53.066 [2024-11-04 16:18:11.645780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.066 [2024-11-04 16:18:11.645958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:53.066 [2024-11-04 16:18:11.645981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.953 ms 00:23:53.066 [2024-11-04 16:18:11.645992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.066 [2024-11-04 16:18:11.658141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.066 [2024-11-04 16:18:11.658181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:53.066 [2024-11-04 16:18:11.658194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.655 ms 00:23:53.066 [2024-11-04 16:18:11.658205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.066 [2024-11-04 16:18:11.681711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.066 [2024-11-04 16:18:11.681778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:53.066 [2024-11-04 16:18:11.681792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.520 ms 00:23:53.066 [2024-11-04 16:18:11.681804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.066 [2024-11-04 16:18:11.686547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.066 [2024-11-04 16:18:11.686586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:53.066 [2024-11-04 16:18:11.686597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.717 ms 00:23:53.066 [2024-11-04 16:18:11.686607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.066 [2024-11-04 16:18:11.721166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.066 [2024-11-04 16:18:11.721203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:53.066 [2024-11-04 16:18:11.721215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.553 ms 00:23:53.066 [2024-11-04 16:18:11.721225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.066 [2024-11-04 16:18:11.740877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.066 [2024-11-04 16:18:11.740917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:53.066 [2024-11-04 16:18:11.740930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.649 ms 00:23:53.066 [2024-11-04 16:18:11.740955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.326 [2024-11-04 16:18:11.854220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.326 [2024-11-04 16:18:11.854276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:53.326 [2024-11-04 
16:18:11.854290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.408 ms 00:23:53.326 [2024-11-04 16:18:11.854301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.326 [2024-11-04 16:18:11.889089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.326 [2024-11-04 16:18:11.889227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:53.326 [2024-11-04 16:18:11.889246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.827 ms 00:23:53.326 [2024-11-04 16:18:11.889272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.327 [2024-11-04 16:18:11.922918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.327 [2024-11-04 16:18:11.923062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:53.327 [2024-11-04 16:18:11.923080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.646 ms 00:23:53.327 [2024-11-04 16:18:11.923106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.327 [2024-11-04 16:18:11.956731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.327 [2024-11-04 16:18:11.956769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:53.327 [2024-11-04 16:18:11.956782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.624 ms 00:23:53.327 [2024-11-04 16:18:11.956791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.327 [2024-11-04 16:18:11.990427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.327 [2024-11-04 16:18:11.990461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:53.327 [2024-11-04 16:18:11.990473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.621 ms 00:23:53.327 [2024-11-04 16:18:11.990498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.327 [2024-11-04 16:18:11.990534] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:53.327 [2024-11-04 16:18:11.990567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 100864 / 261120 wr_cnt: 1 state: open 00:23:53.327 [2024-11-04 16:18:11.990585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 
00:23:53.327 [2024-11-04 16:18:11.990680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 
wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.990998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:53.327 [2024-11-04 16:18:11.991422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991484] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:53.328 [2024-11-04 16:18:11.991657] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:53.328 [2024-11-04 16:18:11.991667] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f92d22d1-3db6-4ffd-ae00-b4e7f5d476c5 00:23:53.328 [2024-11-04 16:18:11.991677] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 100864 00:23:53.328 [2024-11-04 16:18:11.991687] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 101824 00:23:53.328 [2024-11-04 16:18:11.991697] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 100864 00:23:53.328 [2024-11-04 16:18:11.991707] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0095 00:23:53.328 [2024-11-04 16:18:11.991717] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:53.328 [2024-11-04 16:18:11.991732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:53.328 [2024-11-04 16:18:11.991751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:53.328 [2024-11-04 16:18:11.991760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:53.328 [2024-11-04 16:18:11.991778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:53.328 [2024-11-04 16:18:11.991788] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.328 [2024-11-04 16:18:11.991798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:53.328 [2024-11-04 16:18:11.991809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:23:53.328 [2024-11-04 16:18:11.991818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.328 [2024-11-04 16:18:12.010916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.328 [2024-11-04 16:18:12.010951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:53.328 [2024-11-04 16:18:12.010963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.095 ms 00:23:53.328 [2024-11-04 16:18:12.010979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.328 [2024-11-04 16:18:12.011522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.328 [2024-11-04 16:18:12.011539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:53.328 [2024-11-04 16:18:12.011549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:23:53.328 [2024-11-04 16:18:12.011575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.587 [2024-11-04 16:18:12.062579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.587 [2024-11-04 16:18:12.062630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:53.587 [2024-11-04 16:18:12.062647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.587 [2024-11-04 16:18:12.062657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.587 [2024-11-04 16:18:12.062707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.587 [2024-11-04 16:18:12.062717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:53.587 [2024-11-04 16:18:12.062727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.587 [2024-11-04 16:18:12.062737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.587 [2024-11-04 16:18:12.062829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.587 [2024-11-04 16:18:12.062842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:53.587 [2024-11-04 16:18:12.062852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.587 [2024-11-04 16:18:12.062866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.587 [2024-11-04 16:18:12.062882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.587 [2024-11-04 16:18:12.062892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:53.587 [2024-11-04 16:18:12.062901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.587 [2024-11-04 16:18:12.062911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.587 [2024-11-04 16:18:12.181056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.587 [2024-11-04 16:18:12.181116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:53.587 [2024-11-04 16:18:12.181135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.587 [2024-11-04 16:18:12.181162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:53.587 [2024-11-04 16:18:12.277312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.587 [2024-11-04 16:18:12.277491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:53.587 [2024-11-04 16:18:12.277624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.587 [2024-11-04 16:18:12.277661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.587 [2024-11-04 16:18:12.277786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.587 [2024-11-04 16:18:12.277877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:53.587 [2024-11-04 16:18:12.277914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.587 [2024-11-04 16:18:12.277943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.587 [2024-11-04 16:18:12.278051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.587 [2024-11-04 16:18:12.278087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:53.587 [2024-11-04 16:18:12.278118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.587 [2024-11-04 16:18:12.278190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.587 [2024-11-04 16:18:12.278330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.587 [2024-11-04 16:18:12.278367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:53.587 [2024-11-04 16:18:12.278398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.587 [2024-11-04 16:18:12.278466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.587 [2024-11-04 16:18:12.278546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.587 [2024-11-04 16:18:12.278572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:53.587 [2024-11-04 16:18:12.278583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.587 [2024-11-04 16:18:12.278592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.587 [2024-11-04 16:18:12.278631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.587 [2024-11-04 16:18:12.278642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:53.587 [2024-11-04 16:18:12.278652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.587 [2024-11-04 16:18:12.278662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.588 [2024-11-04 16:18:12.278720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.588 [2024-11-04 16:18:12.278735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:53.588 [2024-11-04 16:18:12.278764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.588 [2024-11-04 16:18:12.278775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.588 [2024-11-04 16:18:12.278922] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 644.392 ms, result 0 00:23:55.493 00:23:55.493 00:23:55.493 16:18:13 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:55.493 [2024-11-04 16:18:14.011341] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:23:55.493 [2024-11-04 16:18:14.011469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77916 ] 00:23:55.493 [2024-11-04 16:18:14.187122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.750 [2024-11-04 16:18:14.292550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.008 [2024-11-04 16:18:14.622336] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:56.008 [2024-11-04 16:18:14.622401] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:56.270 [2024-11-04 16:18:14.782857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.270 [2024-11-04 16:18:14.782902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:56.270 [2024-11-04 16:18:14.782922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:56.270 [2024-11-04 16:18:14.782948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.270 [2024-11-04 16:18:14.782995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.270 [2024-11-04 16:18:14.783007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:56.270 [2024-11-04 16:18:14.783020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:56.270 [2024-11-04 16:18:14.783030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.270 [2024-11-04 16:18:14.783051] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:56.270 [2024-11-04 16:18:14.784201] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:56.270 [2024-11-04 16:18:14.784290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.270 [2024-11-04 16:18:14.784387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:56.270 [2024-11-04 16:18:14.784424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:23:56.270 [2024-11-04 16:18:14.784455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.270 [2024-11-04 16:18:14.785982] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:56.270 [2024-11-04 16:18:14.803818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.270 [2024-11-04 16:18:14.803951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:56.270 [2024-11-04 16:18:14.804094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.865 ms 00:23:56.270 [2024-11-04 16:18:14.804133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.270 [2024-11-04 16:18:14.804215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.270 [2024-11-04 16:18:14.804255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:56.270 [2024-11-04 16:18:14.804287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 
00:23:56.270 [2024-11-04 16:18:14.804367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.270 [2024-11-04 16:18:14.811270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.270 [2024-11-04 16:18:14.811404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:56.270 [2024-11-04 16:18:14.811423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.811 ms 00:23:56.270 [2024-11-04 16:18:14.811434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.270 [2024-11-04 16:18:14.811535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.270 [2024-11-04 16:18:14.811547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:56.270 [2024-11-04 16:18:14.811558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:56.270 [2024-11-04 16:18:14.811568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.270 [2024-11-04 16:18:14.811609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.270 [2024-11-04 16:18:14.811621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:56.270 [2024-11-04 16:18:14.811632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:56.270 [2024-11-04 16:18:14.811642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.270 [2024-11-04 16:18:14.811665] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:56.270 [2024-11-04 16:18:14.816464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.270 [2024-11-04 16:18:14.816494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:56.270 [2024-11-04 16:18:14.816505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.811 ms 00:23:56.270 [2024-11-04 16:18:14.816534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.270 [2024-11-04 16:18:14.816564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.270 [2024-11-04 16:18:14.816575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:56.270 [2024-11-04 16:18:14.816585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:56.270 [2024-11-04 16:18:14.816595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.270 [2024-11-04 16:18:14.816646] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:56.270 [2024-11-04 16:18:14.816685] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:56.270 [2024-11-04 16:18:14.816729] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:56.270 [2024-11-04 16:18:14.816768] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:56.270 [2024-11-04 16:18:14.816856] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:56.270 [2024-11-04 16:18:14.816869] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:56.270 [2024-11-04 16:18:14.816882] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 
0x190 bytes 00:23:56.270 [2024-11-04 16:18:14.816896] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:56.270 [2024-11-04 16:18:14.816907] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:56.270 [2024-11-04 16:18:14.816934] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:56.270 [2024-11-04 16:18:14.816955] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:56.270 [2024-11-04 16:18:14.816964] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:56.270 [2024-11-04 16:18:14.816974] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:56.270 [2024-11-04 16:18:14.816988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.270 [2024-11-04 16:18:14.816998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:56.270 [2024-11-04 16:18:14.817008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:23:56.270 [2024-11-04 16:18:14.817018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.270 [2024-11-04 16:18:14.817089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.270 [2024-11-04 16:18:14.817100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:56.270 [2024-11-04 16:18:14.817110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:56.270 [2024-11-04 16:18:14.817119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.270 [2024-11-04 16:18:14.817209] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:56.270 [2024-11-04 16:18:14.817226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:56.270 [2024-11-04 16:18:14.817236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:56.270 [2024-11-04 16:18:14.817246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.270 [2024-11-04 16:18:14.817256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:56.270 [2024-11-04 16:18:14.817265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:56.270 [2024-11-04 16:18:14.817290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:56.270 [2024-11-04 16:18:14.817300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:56.270 [2024-11-04 16:18:14.817309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:56.270 [2024-11-04 16:18:14.817318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:56.270 [2024-11-04 16:18:14.817327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:56.270 [2024-11-04 16:18:14.817337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:56.270 [2024-11-04 16:18:14.817346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:56.270 [2024-11-04 16:18:14.817356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:56.270 [2024-11-04 16:18:14.817365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:56.270 [2024-11-04 16:18:14.817384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.270 [2024-11-04 16:18:14.817393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region nvc_md_mirror 00:23:56.270 [2024-11-04 16:18:14.817403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:56.270 [2024-11-04 16:18:14.817412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.270 [2024-11-04 16:18:14.817421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:56.270 [2024-11-04 16:18:14.817431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:56.270 [2024-11-04 16:18:14.817440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.270 [2024-11-04 16:18:14.817449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:56.270 [2024-11-04 16:18:14.817459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:56.270 [2024-11-04 16:18:14.817468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.270 [2024-11-04 16:18:14.817477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:56.270 [2024-11-04 16:18:14.817486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:56.270 [2024-11-04 16:18:14.817495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.270 [2024-11-04 16:18:14.817504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:56.270 [2024-11-04 16:18:14.817514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:56.270 [2024-11-04 16:18:14.817523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.271 [2024-11-04 16:18:14.817532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:56.271 [2024-11-04 16:18:14.817541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:56.271 [2024-11-04 16:18:14.817550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:56.271 [2024-11-04 16:18:14.817558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:56.271 [2024-11-04 16:18:14.817568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:56.271 [2024-11-04 16:18:14.817577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:56.271 [2024-11-04 16:18:14.817586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:56.271 [2024-11-04 16:18:14.817594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:56.271 [2024-11-04 16:18:14.817603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.271 [2024-11-04 16:18:14.817612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:56.271 [2024-11-04 16:18:14.817622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:56.271 [2024-11-04 16:18:14.817630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.271 [2024-11-04 16:18:14.817640] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:56.271 [2024-11-04 16:18:14.817650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:56.271 [2024-11-04 16:18:14.817660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:56.271 [2024-11-04 16:18:14.817670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.271 [2024-11-04 16:18:14.817680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:56.271 [2024-11-04 16:18:14.817689] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:56.271 [2024-11-04 16:18:14.817699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:56.271 [2024-11-04 16:18:14.817709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:56.271 [2024-11-04 16:18:14.817718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:56.271 [2024-11-04 16:18:14.817727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:56.271 [2024-11-04 16:18:14.817738] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:56.271 [2024-11-04 16:18:14.817750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:56.271 [2024-11-04 16:18:14.817761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:56.271 [2024-11-04 16:18:14.817772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:56.271 [2024-11-04 16:18:14.817782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:56.271 [2024-11-04 16:18:14.817805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:56.271 [2024-11-04 16:18:14.817816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:56.271 [2024-11-04 16:18:14.817826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:56.271 [2024-11-04 16:18:14.817836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:56.271 [2024-11-04 16:18:14.817846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:56.271 [2024-11-04 16:18:14.817857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:56.271 [2024-11-04 16:18:14.817867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:56.271 [2024-11-04 16:18:14.817877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:56.271 [2024-11-04 16:18:14.817887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:56.271 [2024-11-04 16:18:14.817897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:56.271 [2024-11-04 16:18:14.817908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:56.271 [2024-11-04 16:18:14.817918] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:56.271 [2024-11-04 16:18:14.817933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:56.271 [2024-11-04 16:18:14.817944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:56.271 [2024-11-04 16:18:14.817954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:56.271 [2024-11-04 16:18:14.817964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:56.271 [2024-11-04 16:18:14.817974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:56.271 [2024-11-04 16:18:14.817987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.271 [2024-11-04 16:18:14.817997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:56.271 [2024-11-04 16:18:14.818007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:23:56.271 [2024-11-04 16:18:14.818017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.271 [2024-11-04 16:18:14.857066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.271 [2024-11-04 16:18:14.857100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:56.271 [2024-11-04 16:18:14.857113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.067 ms 00:23:56.271 [2024-11-04 16:18:14.857139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.271 [2024-11-04 16:18:14.857215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.271 [2024-11-04 16:18:14.857226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:56.271 [2024-11-04 16:18:14.857236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:56.271 [2024-11-04 16:18:14.857246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.271 [2024-11-04 16:18:14.931656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.271 [2024-11-04 16:18:14.931694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:56.271 [2024-11-04 16:18:14.931708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.478 ms 00:23:56.271 [2024-11-04 16:18:14.931734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.271 [2024-11-04 16:18:14.931783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.271 [2024-11-04 16:18:14.931795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:56.271 [2024-11-04 16:18:14.931807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:56.271 [2024-11-04 16:18:14.931821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.271 [2024-11-04 16:18:14.932320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.271 [2024-11-04 16:18:14.932341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:56.271 [2024-11-04 16:18:14.932352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:23:56.271 [2024-11-04 16:18:14.932362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.271 [2024-11-04 16:18:14.932478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:56.271 [2024-11-04 16:18:14.932491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:56.271 [2024-11-04 16:18:14.932502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:23:56.271 [2024-11-04 16:18:14.932518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.271 [2024-11-04 16:18:14.950917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.271 [2024-11-04 16:18:14.950952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:56.271 [2024-11-04 16:18:14.950968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.408 ms 00:23:56.271 [2024-11-04 16:18:14.950995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.271 [2024-11-04 16:18:14.969773] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:56.271 [2024-11-04 16:18:14.969812] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:56.271 [2024-11-04 16:18:14.969827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.271 [2024-11-04 16:18:14.969855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:56.271 [2024-11-04 16:18:14.969866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.768 ms 00:23:56.271 [2024-11-04 16:18:14.969876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.531 [2024-11-04 16:18:14.999658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.531 [2024-11-04 16:18:14.999705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:56.531 [2024-11-04 16:18:14.999718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.787 ms 00:23:56.531 [2024-11-04 16:18:14.999745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.531 [2024-11-04 16:18:15.017307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.531 [2024-11-04 16:18:15.017352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:56.531 [2024-11-04 16:18:15.017364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.535 ms 00:23:56.531 [2024-11-04 16:18:15.017373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.531 [2024-11-04 16:18:15.035017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.531 [2024-11-04 16:18:15.035051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:56.531 [2024-11-04 16:18:15.035063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.637 ms 00:23:56.531 [2024-11-04 16:18:15.035088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.531 [2024-11-04 16:18:15.035904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.531 [2024-11-04 16:18:15.035938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:56.531 [2024-11-04 16:18:15.035950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:23:56.531 [2024-11-04 16:18:15.035963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.531 [2024-11-04 16:18:15.117990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.531 [2024-11-04 
16:18:15.118048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:56.531 [2024-11-04 16:18:15.118070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.137 ms 00:23:56.531 [2024-11-04 16:18:15.118080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.531 [2024-11-04 16:18:15.128298] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:56.531 [2024-11-04 16:18:15.130521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.531 [2024-11-04 16:18:15.130677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:56.531 [2024-11-04 16:18:15.130698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.417 ms 00:23:56.531 [2024-11-04 16:18:15.130709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.531 [2024-11-04 16:18:15.130801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.531 [2024-11-04 16:18:15.130815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:56.531 [2024-11-04 16:18:15.130826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:56.531 [2024-11-04 16:18:15.130840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.531 [2024-11-04 16:18:15.132288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.531 [2024-11-04 16:18:15.132325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:56.531 [2024-11-04 16:18:15.132337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.406 ms 00:23:56.531 [2024-11-04 16:18:15.132347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.531 [2024-11-04 16:18:15.132374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.531 [2024-11-04 16:18:15.132385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:56.531 [2024-11-04 16:18:15.132396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:56.531 [2024-11-04 16:18:15.132406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.531 [2024-11-04 16:18:15.132444] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:56.531 [2024-11-04 16:18:15.132459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.531 [2024-11-04 16:18:15.132470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:56.531 [2024-11-04 16:18:15.132480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:56.531 [2024-11-04 16:18:15.132490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.531 [2024-11-04 16:18:15.167174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.531 [2024-11-04 16:18:15.167210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:56.531 [2024-11-04 16:18:15.167224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.721 ms 00:23:56.531 [2024-11-04 16:18:15.167239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.531 [2024-11-04 16:18:15.167308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.531 [2024-11-04 16:18:15.167320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:56.531 [2024-11-04 
16:18:15.167330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:56.531 [2024-11-04 16:18:15.167339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.531 [2024-11-04 16:18:15.168462] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 385.799 ms, result 0 00:23:57.907  [2024-11-04T16:18:17.566Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-04T16:18:18.502Z] Copying: 46/1024 [MB] (25 MBps) [2024-11-04T16:18:19.437Z] Copying: 73/1024 [MB] (26 MBps) [2024-11-04T16:18:20.372Z] Copying: 99/1024 [MB] (25 MBps) [2024-11-04T16:18:21.748Z] Copying: 125/1024 [MB] (26 MBps) [2024-11-04T16:18:22.685Z] Copying: 152/1024 [MB] (26 MBps) [2024-11-04T16:18:23.621Z] Copying: 179/1024 [MB] (26 MBps) [2024-11-04T16:18:24.557Z] Copying: 206/1024 [MB] (27 MBps) [2024-11-04T16:18:25.504Z] Copying: 233/1024 [MB] (26 MBps) [2024-11-04T16:18:26.440Z] Copying: 260/1024 [MB] (27 MBps) [2024-11-04T16:18:27.377Z] Copying: 287/1024 [MB] (26 MBps) [2024-11-04T16:18:28.754Z] Copying: 313/1024 [MB] (26 MBps) [2024-11-04T16:18:29.689Z] Copying: 340/1024 [MB] (26 MBps) [2024-11-04T16:18:30.626Z] Copying: 367/1024 [MB] (26 MBps) [2024-11-04T16:18:31.562Z] Copying: 394/1024 [MB] (27 MBps) [2024-11-04T16:18:32.498Z] Copying: 422/1024 [MB] (27 MBps) [2024-11-04T16:18:33.434Z] Copying: 449/1024 [MB] (27 MBps) [2024-11-04T16:18:34.370Z] Copying: 476/1024 [MB] (26 MBps) [2024-11-04T16:18:35.746Z] Copying: 501/1024 [MB] (25 MBps) [2024-11-04T16:18:36.682Z] Copying: 527/1024 [MB] (25 MBps) [2024-11-04T16:18:37.621Z] Copying: 553/1024 [MB] (25 MBps) [2024-11-04T16:18:38.558Z] Copying: 578/1024 [MB] (25 MBps) [2024-11-04T16:18:39.494Z] Copying: 604/1024 [MB] (25 MBps) [2024-11-04T16:18:40.428Z] Copying: 630/1024 [MB] (26 MBps) [2024-11-04T16:18:41.365Z] Copying: 657/1024 [MB] (26 MBps) [2024-11-04T16:18:42.742Z] Copying: 683/1024 [MB] (26 MBps) [2024-11-04T16:18:43.677Z] Copying: 709/1024 [MB] (26 MBps) [2024-11-04T16:18:44.613Z] Copying: 736/1024 [MB] (26 MBps) [2024-11-04T16:18:45.549Z] Copying: 762/1024 [MB] (25 MBps) [2024-11-04T16:18:46.485Z] Copying: 788/1024 [MB] (25 MBps) [2024-11-04T16:18:47.422Z] Copying: 814/1024 [MB] (25 MBps) [2024-11-04T16:18:48.390Z] Copying: 839/1024 [MB] (25 MBps) [2024-11-04T16:18:49.327Z] Copying: 866/1024 [MB] (26 MBps) [2024-11-04T16:18:50.703Z] Copying: 892/1024 [MB] (26 MBps) [2024-11-04T16:18:51.638Z] Copying: 918/1024 [MB] (25 MBps) [2024-11-04T16:18:52.575Z] Copying: 944/1024 [MB] (26 MBps) [2024-11-04T16:18:53.513Z] Copying: 970/1024 [MB] (25 MBps) [2024-11-04T16:18:54.449Z] Copying: 995/1024 [MB] (25 MBps) [2024-11-04T16:18:54.708Z] Copying: 1020/1024 [MB] (25 MBps) [2024-11-04T16:18:54.708Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-04 16:18:54.536350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.986 [2024-11-04 16:18:54.536449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:35.986 [2024-11-04 16:18:54.536488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:35.986 [2024-11-04 16:18:54.536508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.986 [2024-11-04 16:18:54.536555] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:35.986 [2024-11-04 16:18:54.545285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.986 [2024-11-04 16:18:54.545349] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:35.986 [2024-11-04 16:18:54.545375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.712 ms 00:24:35.986 [2024-11-04 16:18:54.545396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.986 [2024-11-04 16:18:54.545816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.986 [2024-11-04 16:18:54.545845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:35.986 [2024-11-04 16:18:54.545868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:24:35.986 [2024-11-04 16:18:54.545889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.986 [2024-11-04 16:18:54.554209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.986 [2024-11-04 16:18:54.554257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:35.986 [2024-11-04 16:18:54.554275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.293 ms 00:24:35.986 [2024-11-04 16:18:54.554289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.986 [2024-11-04 16:18:54.561348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.986 [2024-11-04 16:18:54.561389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:35.986 [2024-11-04 16:18:54.561421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.026 ms 00:24:35.986 [2024-11-04 16:18:54.561437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.986 [2024-11-04 16:18:54.596120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.986 [2024-11-04 16:18:54.596157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:35.986 [2024-11-04 16:18:54.596170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.684 ms 00:24:35.986 [2024-11-04 16:18:54.596179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.986 [2024-11-04 16:18:54.616900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.986 [2024-11-04 16:18:54.616956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:35.986 [2024-11-04 16:18:54.616970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.720 ms 00:24:35.986 [2024-11-04 16:18:54.616980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.246 [2024-11-04 16:18:54.763118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.246 [2024-11-04 16:18:54.763167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:36.246 [2024-11-04 16:18:54.763183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 146.334 ms 00:24:36.246 [2024-11-04 16:18:54.763194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.246 [2024-11-04 16:18:54.798034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.246 [2024-11-04 16:18:54.798067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:36.246 [2024-11-04 16:18:54.798080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.879 ms 00:24:36.246 [2024-11-04 16:18:54.798089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.246 [2024-11-04 16:18:54.832113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:36.246 [2024-11-04 16:18:54.832147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:36.246 [2024-11-04 16:18:54.832170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.044 ms 00:24:36.246 [2024-11-04 16:18:54.832179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.246 [2024-11-04 16:18:54.865423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.246 [2024-11-04 16:18:54.865579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:36.246 [2024-11-04 16:18:54.865599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.263 ms 00:24:36.246 [2024-11-04 16:18:54.865609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.246 [2024-11-04 16:18:54.899406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.246 [2024-11-04 16:18:54.899585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:36.246 [2024-11-04 16:18:54.899606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.762 ms 00:24:36.246 [2024-11-04 16:18:54.899616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.246 [2024-11-04 16:18:54.899649] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:36.246 [2024-11-04 16:18:54.899666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:36.246 [2024-11-04 16:18:54.899680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:36.246 [2024-11-04 16:18:54.899691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:36.246 [2024-11-04 16:18:54.899702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:36.246 [2024-11-04 16:18:54.899714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899851] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.899993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 
[2024-11-04 16:18:54.900122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 
state: free 00:24:36.247 [2024-11-04 16:18:54.900389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 
0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:36.247 [2024-11-04 16:18:54.900704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:36.248 [2024-11-04 16:18:54.900714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:36.248 [2024-11-04 16:18:54.900725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:36.248 [2024-11-04 16:18:54.900735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:36.248 [2024-11-04 16:18:54.900763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:36.248 [2024-11-04 16:18:54.900781] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:36.248 [2024-11-04 16:18:54.900791] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f92d22d1-3db6-4ffd-ae00-b4e7f5d476c5 00:24:36.248 [2024-11-04 16:18:54.900802] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:36.248 [2024-11-04 16:18:54.900811] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 31168 00:24:36.248 [2024-11-04 16:18:54.900821] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 30208 00:24:36.248 [2024-11-04 16:18:54.900831] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0318 00:24:36.248 [2024-11-04 16:18:54.900841] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:36.248 [2024-11-04 16:18:54.900856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:36.248 [2024-11-04 16:18:54.900865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:36.248 [2024-11-04 16:18:54.900883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:36.248 [2024-11-04 16:18:54.900893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:36.248 [2024-11-04 16:18:54.900903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.248 [2024-11-04 16:18:54.900914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:36.248 [2024-11-04 16:18:54.900923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:24:36.248 [2024-11-04 16:18:54.900933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.248 [2024-11-04 16:18:54.919280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.248 [2024-11-04 16:18:54.919311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:36.248 [2024-11-04 16:18:54.919323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.345 ms 00:24:36.248 [2024-11-04 
16:18:54.919337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.248 [2024-11-04 16:18:54.919909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.248 [2024-11-04 16:18:54.919923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:36.248 [2024-11-04 16:18:54.919933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:24:36.248 [2024-11-04 16:18:54.919943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.507 [2024-11-04 16:18:54.967560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.507 [2024-11-04 16:18:54.967594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:36.507 [2024-11-04 16:18:54.967611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.507 [2024-11-04 16:18:54.967621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.507 [2024-11-04 16:18:54.967667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.507 [2024-11-04 16:18:54.967678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:36.507 [2024-11-04 16:18:54.967687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.507 [2024-11-04 16:18:54.967697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.507 [2024-11-04 16:18:54.967802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.507 [2024-11-04 16:18:54.967818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:36.507 [2024-11-04 16:18:54.967829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.507 [2024-11-04 16:18:54.967844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.507 [2024-11-04 16:18:54.967860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.507 [2024-11-04 16:18:54.967870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:36.507 [2024-11-04 16:18:54.967881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.507 [2024-11-04 16:18:54.967890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.507 [2024-11-04 16:18:55.085592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.507 [2024-11-04 16:18:55.085641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:36.507 [2024-11-04 16:18:55.085662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.507 [2024-11-04 16:18:55.085672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.507 [2024-11-04 16:18:55.181272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.507 [2024-11-04 16:18:55.181318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:36.507 [2024-11-04 16:18:55.181334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.507 [2024-11-04 16:18:55.181344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.507 [2024-11-04 16:18:55.181429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.507 [2024-11-04 16:18:55.181441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:36.507 [2024-11-04 16:18:55.181452] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.507 [2024-11-04 16:18:55.181463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.507 [2024-11-04 16:18:55.181504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.507 [2024-11-04 16:18:55.181515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:36.507 [2024-11-04 16:18:55.181526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.507 [2024-11-04 16:18:55.181536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.507 [2024-11-04 16:18:55.181662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.507 [2024-11-04 16:18:55.181677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:36.507 [2024-11-04 16:18:55.181688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.507 [2024-11-04 16:18:55.181699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.507 [2024-11-04 16:18:55.181741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.507 [2024-11-04 16:18:55.181776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:36.507 [2024-11-04 16:18:55.181788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.507 [2024-11-04 16:18:55.181798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.507 [2024-11-04 16:18:55.181836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.507 [2024-11-04 16:18:55.181847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:36.507 [2024-11-04 16:18:55.181858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.507 [2024-11-04 16:18:55.181868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.507 [2024-11-04 16:18:55.181910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.507 [2024-11-04 16:18:55.181922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:36.507 [2024-11-04 16:18:55.181932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.507 [2024-11-04 16:18:55.181942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.507 [2024-11-04 16:18:55.182056] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 646.737 ms, result 0 00:24:37.443 00:24:37.443 00:24:37.702 16:18:56 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:39.605 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:39.605 16:18:57 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:39.605 16:18:57 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:39.605 16:18:57 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:39.605 16:18:58 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:39.605 16:18:58 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:39.605 16:18:58 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76259 00:24:39.605 16:18:58 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 76259 ']' 00:24:39.605 16:18:58 ftl.ftl_restore 
-- common/autotest_common.sh@956 -- # kill -0 76259 00:24:39.605 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (76259) - No such process 00:24:39.605 Process with pid 76259 is not found 00:24:39.605 16:18:58 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 76259 is not found' 00:24:39.605 16:18:58 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:39.605 Remove shared memory files 00:24:39.605 16:18:58 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:39.605 16:18:58 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:39.606 16:18:58 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:39.606 16:18:58 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:39.606 16:18:58 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:39.606 16:18:58 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:39.606 ************************************ 00:24:39.606 END TEST ftl_restore 00:24:39.606 ************************************ 00:24:39.606 00:24:39.606 real 3m23.166s 00:24:39.606 user 3m10.177s 00:24:39.606 sys 0m14.231s 00:24:39.606 16:18:58 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:39.606 16:18:58 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:39.606 16:18:58 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:39.606 16:18:58 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:24:39.606 16:18:58 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:39.606 16:18:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:39.606 ************************************ 00:24:39.606 START TEST ftl_dirty_shutdown 00:24:39.606 ************************************ 00:24:39.606 16:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:39.606 * Looking for test storage... 
00:24:39.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:39.606 16:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:39.606 16:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:24:39.606 16:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:39.865 16:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:39.865 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:39.865 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:39.865 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:39.865 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.865 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:39.865 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:39.865 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:39.865 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:39.865 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:39.865 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:39.865 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:39.865 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:39.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.866 --rc genhtml_branch_coverage=1 00:24:39.866 --rc genhtml_function_coverage=1 00:24:39.866 --rc genhtml_legend=1 00:24:39.866 --rc geninfo_all_blocks=1 00:24:39.866 --rc geninfo_unexecuted_blocks=1 00:24:39.866 00:24:39.866 ' 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:39.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.866 --rc genhtml_branch_coverage=1 00:24:39.866 --rc genhtml_function_coverage=1 00:24:39.866 --rc genhtml_legend=1 00:24:39.866 --rc geninfo_all_blocks=1 00:24:39.866 --rc geninfo_unexecuted_blocks=1 00:24:39.866 00:24:39.866 ' 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:39.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.866 --rc genhtml_branch_coverage=1 00:24:39.866 --rc genhtml_function_coverage=1 00:24:39.866 --rc genhtml_legend=1 00:24:39.866 --rc geninfo_all_blocks=1 00:24:39.866 --rc geninfo_unexecuted_blocks=1 00:24:39.866 00:24:39.866 ' 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:39.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.866 --rc genhtml_branch_coverage=1 00:24:39.866 --rc genhtml_function_coverage=1 00:24:39.866 --rc genhtml_legend=1 00:24:39.866 --rc geninfo_all_blocks=1 00:24:39.866 --rc geninfo_unexecuted_blocks=1 00:24:39.866 00:24:39.866 ' 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:39.866 16:18:58 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78438 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78438 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 78438 ']' 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:39.866 16:18:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:39.866 [2024-11-04 16:18:58.544353] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
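Up to this point the trace only launches the SPDK application that the dirty-shutdown test then drives over RPC: dirty_shutdown.sh parses its -c/-u options (NV cache device 0000:00:10.0, data device 0000:00:11.0), starts spdk_tgt on core 0 and records its PID as svcpid, and waits for the RPC socket. A minimal manual equivalent, assuming the same repo path and the default /var/tmp/spdk.sock socket shown in the log (the polling loop is only illustrative; the test itself uses the waitforlisten helper from autotest_common.sh):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
svcpid=$!
# Poll the default RPC socket until the target answers; rpc_get_methods is a cheap read-only query.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done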
00:24:39.866 [2024-11-04 16:18:58.544696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78438 ] 00:24:40.124 [2024-11-04 16:18:58.721523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.124 [2024-11-04 16:18:58.830164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.088 16:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:41.088 16:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:24:41.088 16:18:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:41.088 16:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:41.088 16:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:41.088 16:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:41.088 16:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:41.088 16:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:41.346 16:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:41.346 16:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:41.346 16:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:41.346 16:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:24:41.346 16:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:41.346 16:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:41.346 16:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:41.346 16:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:41.604 16:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:41.604 { 00:24:41.604 "name": "nvme0n1", 00:24:41.604 "aliases": [ 00:24:41.604 "ed0819c3-dee8-43f0-898b-33b9e7494a55" 00:24:41.604 ], 00:24:41.604 "product_name": "NVMe disk", 00:24:41.604 "block_size": 4096, 00:24:41.604 "num_blocks": 1310720, 00:24:41.604 "uuid": "ed0819c3-dee8-43f0-898b-33b9e7494a55", 00:24:41.604 "numa_id": -1, 00:24:41.604 "assigned_rate_limits": { 00:24:41.604 "rw_ios_per_sec": 0, 00:24:41.604 "rw_mbytes_per_sec": 0, 00:24:41.604 "r_mbytes_per_sec": 0, 00:24:41.604 "w_mbytes_per_sec": 0 00:24:41.604 }, 00:24:41.604 "claimed": true, 00:24:41.604 "claim_type": "read_many_write_one", 00:24:41.604 "zoned": false, 00:24:41.604 "supported_io_types": { 00:24:41.604 "read": true, 00:24:41.604 "write": true, 00:24:41.604 "unmap": true, 00:24:41.604 "flush": true, 00:24:41.604 "reset": true, 00:24:41.604 "nvme_admin": true, 00:24:41.604 "nvme_io": true, 00:24:41.604 "nvme_io_md": false, 00:24:41.604 "write_zeroes": true, 00:24:41.604 "zcopy": false, 00:24:41.604 "get_zone_info": false, 00:24:41.604 "zone_management": false, 00:24:41.604 "zone_append": false, 00:24:41.604 "compare": true, 00:24:41.604 "compare_and_write": false, 00:24:41.604 "abort": true, 00:24:41.604 "seek_hole": false, 00:24:41.604 "seek_data": false, 00:24:41.604 
"copy": true, 00:24:41.604 "nvme_iov_md": false 00:24:41.604 }, 00:24:41.604 "driver_specific": { 00:24:41.604 "nvme": [ 00:24:41.604 { 00:24:41.604 "pci_address": "0000:00:11.0", 00:24:41.604 "trid": { 00:24:41.604 "trtype": "PCIe", 00:24:41.604 "traddr": "0000:00:11.0" 00:24:41.604 }, 00:24:41.604 "ctrlr_data": { 00:24:41.604 "cntlid": 0, 00:24:41.604 "vendor_id": "0x1b36", 00:24:41.604 "model_number": "QEMU NVMe Ctrl", 00:24:41.604 "serial_number": "12341", 00:24:41.604 "firmware_revision": "8.0.0", 00:24:41.604 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:41.604 "oacs": { 00:24:41.605 "security": 0, 00:24:41.605 "format": 1, 00:24:41.605 "firmware": 0, 00:24:41.605 "ns_manage": 1 00:24:41.605 }, 00:24:41.605 "multi_ctrlr": false, 00:24:41.605 "ana_reporting": false 00:24:41.605 }, 00:24:41.605 "vs": { 00:24:41.605 "nvme_version": "1.4" 00:24:41.605 }, 00:24:41.605 "ns_data": { 00:24:41.605 "id": 1, 00:24:41.605 "can_share": false 00:24:41.605 } 00:24:41.605 } 00:24:41.605 ], 00:24:41.605 "mp_policy": "active_passive" 00:24:41.605 } 00:24:41.605 } 00:24:41.605 ]' 00:24:41.605 16:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:41.605 16:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:41.605 16:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:41.605 16:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:24:41.605 16:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:24:41.605 16:19:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:24:41.605 16:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:41.605 16:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:41.605 16:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:41.605 16:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:41.605 16:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:41.863 16:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=886d6167-793d-42f4-b2c0-bc712cd604bc 00:24:41.863 16:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:41.863 16:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 886d6167-793d-42f4-b2c0-bc712cd604bc 00:24:42.123 16:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:42.381 16:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=323048fa-1657-4333-a4c4-269551e56038 00:24:42.381 16:19:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 323048fa-1657-4333-a4c4-269551e56038 00:24:42.382 16:19:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=4e283fec-a4c4-4f05-ad17-a0ee365fcca6 00:24:42.382 16:19:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:42.382 16:19:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4e283fec-a4c4-4f05-ad17-a0ee365fcca6 00:24:42.382 16:19:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:42.382 16:19:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:42.382 16:19:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=4e283fec-a4c4-4f05-ad17-a0ee365fcca6 00:24:42.382 16:19:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:42.382 16:19:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 4e283fec-a4c4-4f05-ad17-a0ee365fcca6 00:24:42.382 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=4e283fec-a4c4-4f05-ad17-a0ee365fcca6 00:24:42.382 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:42.382 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:42.382 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:42.382 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e283fec-a4c4-4f05-ad17-a0ee365fcca6 00:24:42.640 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:42.641 { 00:24:42.641 "name": "4e283fec-a4c4-4f05-ad17-a0ee365fcca6", 00:24:42.641 "aliases": [ 00:24:42.641 "lvs/nvme0n1p0" 00:24:42.641 ], 00:24:42.641 "product_name": "Logical Volume", 00:24:42.641 "block_size": 4096, 00:24:42.641 "num_blocks": 26476544, 00:24:42.641 "uuid": "4e283fec-a4c4-4f05-ad17-a0ee365fcca6", 00:24:42.641 "assigned_rate_limits": { 00:24:42.641 "rw_ios_per_sec": 0, 00:24:42.641 "rw_mbytes_per_sec": 0, 00:24:42.641 "r_mbytes_per_sec": 0, 00:24:42.641 "w_mbytes_per_sec": 0 00:24:42.641 }, 00:24:42.641 "claimed": false, 00:24:42.641 "zoned": false, 00:24:42.641 "supported_io_types": { 00:24:42.641 "read": true, 00:24:42.641 "write": true, 00:24:42.641 "unmap": true, 00:24:42.641 "flush": false, 00:24:42.641 "reset": true, 00:24:42.641 "nvme_admin": false, 00:24:42.641 "nvme_io": false, 00:24:42.641 "nvme_io_md": false, 00:24:42.641 "write_zeroes": true, 00:24:42.641 "zcopy": false, 00:24:42.641 "get_zone_info": false, 00:24:42.641 "zone_management": false, 00:24:42.641 "zone_append": false, 00:24:42.641 "compare": false, 00:24:42.641 "compare_and_write": false, 00:24:42.641 "abort": false, 00:24:42.641 "seek_hole": true, 00:24:42.641 "seek_data": true, 00:24:42.641 "copy": false, 00:24:42.641 "nvme_iov_md": false 00:24:42.641 }, 00:24:42.641 "driver_specific": { 00:24:42.641 "lvol": { 00:24:42.641 "lvol_store_uuid": "323048fa-1657-4333-a4c4-269551e56038", 00:24:42.641 "base_bdev": "nvme0n1", 00:24:42.641 "thin_provision": true, 00:24:42.641 "num_allocated_clusters": 0, 00:24:42.641 "snapshot": false, 00:24:42.641 "clone": false, 00:24:42.641 "esnap_clone": false 00:24:42.641 } 00:24:42.641 } 00:24:42.641 } 00:24:42.641 ]' 00:24:42.641 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:42.641 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:42.641 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:42.899 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:42.899 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:42.899 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:24:42.899 16:19:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:42.899 16:19:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:42.899 16:19:01 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:43.157 16:19:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:43.157 16:19:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:43.157 16:19:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 4e283fec-a4c4-4f05-ad17-a0ee365fcca6 00:24:43.157 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=4e283fec-a4c4-4f05-ad17-a0ee365fcca6 00:24:43.157 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:43.157 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:43.157 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:43.157 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e283fec-a4c4-4f05-ad17-a0ee365fcca6 00:24:43.157 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:43.157 { 00:24:43.157 "name": "4e283fec-a4c4-4f05-ad17-a0ee365fcca6", 00:24:43.157 "aliases": [ 00:24:43.157 "lvs/nvme0n1p0" 00:24:43.157 ], 00:24:43.157 "product_name": "Logical Volume", 00:24:43.157 "block_size": 4096, 00:24:43.157 "num_blocks": 26476544, 00:24:43.157 "uuid": "4e283fec-a4c4-4f05-ad17-a0ee365fcca6", 00:24:43.157 "assigned_rate_limits": { 00:24:43.157 "rw_ios_per_sec": 0, 00:24:43.157 "rw_mbytes_per_sec": 0, 00:24:43.157 "r_mbytes_per_sec": 0, 00:24:43.157 "w_mbytes_per_sec": 0 00:24:43.157 }, 00:24:43.157 "claimed": false, 00:24:43.158 "zoned": false, 00:24:43.158 "supported_io_types": { 00:24:43.158 "read": true, 00:24:43.158 "write": true, 00:24:43.158 "unmap": true, 00:24:43.158 "flush": false, 00:24:43.158 "reset": true, 00:24:43.158 "nvme_admin": false, 00:24:43.158 "nvme_io": false, 00:24:43.158 "nvme_io_md": false, 00:24:43.158 "write_zeroes": true, 00:24:43.158 "zcopy": false, 00:24:43.158 "get_zone_info": false, 00:24:43.158 "zone_management": false, 00:24:43.158 "zone_append": false, 00:24:43.158 "compare": false, 00:24:43.158 "compare_and_write": false, 00:24:43.158 "abort": false, 00:24:43.158 "seek_hole": true, 00:24:43.158 "seek_data": true, 00:24:43.158 "copy": false, 00:24:43.158 "nvme_iov_md": false 00:24:43.158 }, 00:24:43.158 "driver_specific": { 00:24:43.158 "lvol": { 00:24:43.158 "lvol_store_uuid": "323048fa-1657-4333-a4c4-269551e56038", 00:24:43.158 "base_bdev": "nvme0n1", 00:24:43.158 "thin_provision": true, 00:24:43.158 "num_allocated_clusters": 0, 00:24:43.158 "snapshot": false, 00:24:43.158 "clone": false, 00:24:43.158 "esnap_clone": false 00:24:43.158 } 00:24:43.158 } 00:24:43.158 } 00:24:43.158 ]' 00:24:43.158 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:43.416 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:43.416 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:43.416 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:43.416 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:43.416 16:19:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:24:43.416 16:19:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:43.416 16:19:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:43.416 16:19:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:43.416 16:19:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 4e283fec-a4c4-4f05-ad17-a0ee365fcca6 00:24:43.416 16:19:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=4e283fec-a4c4-4f05-ad17-a0ee365fcca6 00:24:43.416 16:19:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:43.416 16:19:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:43.416 16:19:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:43.675 16:19:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e283fec-a4c4-4f05-ad17-a0ee365fcca6 00:24:43.675 16:19:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:43.675 { 00:24:43.675 "name": "4e283fec-a4c4-4f05-ad17-a0ee365fcca6", 00:24:43.675 "aliases": [ 00:24:43.675 "lvs/nvme0n1p0" 00:24:43.675 ], 00:24:43.675 "product_name": "Logical Volume", 00:24:43.675 "block_size": 4096, 00:24:43.675 "num_blocks": 26476544, 00:24:43.675 "uuid": "4e283fec-a4c4-4f05-ad17-a0ee365fcca6", 00:24:43.675 "assigned_rate_limits": { 00:24:43.675 "rw_ios_per_sec": 0, 00:24:43.675 "rw_mbytes_per_sec": 0, 00:24:43.675 "r_mbytes_per_sec": 0, 00:24:43.675 "w_mbytes_per_sec": 0 00:24:43.675 }, 00:24:43.675 "claimed": false, 00:24:43.675 "zoned": false, 00:24:43.675 "supported_io_types": { 00:24:43.675 "read": true, 00:24:43.675 "write": true, 00:24:43.675 "unmap": true, 00:24:43.675 "flush": false, 00:24:43.675 "reset": true, 00:24:43.675 "nvme_admin": false, 00:24:43.675 "nvme_io": false, 00:24:43.675 "nvme_io_md": false, 00:24:43.675 "write_zeroes": true, 00:24:43.675 "zcopy": false, 00:24:43.675 "get_zone_info": false, 00:24:43.675 "zone_management": false, 00:24:43.675 "zone_append": false, 00:24:43.675 "compare": false, 00:24:43.675 "compare_and_write": false, 00:24:43.675 "abort": false, 00:24:43.675 "seek_hole": true, 00:24:43.675 "seek_data": true, 00:24:43.675 "copy": false, 00:24:43.675 "nvme_iov_md": false 00:24:43.675 }, 00:24:43.675 "driver_specific": { 00:24:43.675 "lvol": { 00:24:43.675 "lvol_store_uuid": "323048fa-1657-4333-a4c4-269551e56038", 00:24:43.675 "base_bdev": "nvme0n1", 00:24:43.675 "thin_provision": true, 00:24:43.675 "num_allocated_clusters": 0, 00:24:43.675 "snapshot": false, 00:24:43.675 "clone": false, 00:24:43.675 "esnap_clone": false 00:24:43.675 } 00:24:43.675 } 00:24:43.675 } 00:24:43.675 ]' 00:24:43.675 16:19:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:43.675 16:19:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:43.675 16:19:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:43.935 16:19:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:43.935 16:19:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:43.935 16:19:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:24:43.935 16:19:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:43.935 16:19:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 4e283fec-a4c4-4f05-ad17-a0ee365fcca6 
--l2p_dram_limit 10' 00:24:43.935 16:19:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:43.935 16:19:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:43.935 16:19:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:43.935 16:19:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4e283fec-a4c4-4f05-ad17-a0ee365fcca6 --l2p_dram_limit 10 -c nvc0n1p0 00:24:43.935 [2024-11-04 16:19:02.628401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.935 [2024-11-04 16:19:02.628449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:43.935 [2024-11-04 16:19:02.628467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:43.935 [2024-11-04 16:19:02.628477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.935 [2024-11-04 16:19:02.628540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.935 [2024-11-04 16:19:02.628551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:43.935 [2024-11-04 16:19:02.628564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:43.935 [2024-11-04 16:19:02.628573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.935 [2024-11-04 16:19:02.628602] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:43.935 [2024-11-04 16:19:02.629517] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:43.935 [2024-11-04 16:19:02.629551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.935 [2024-11-04 16:19:02.629562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:43.935 [2024-11-04 16:19:02.629576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:24:43.935 [2024-11-04 16:19:02.629587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.935 [2024-11-04 16:19:02.629666] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7ec1fce4-e67d-4e51-9974-2fdcba28edba 00:24:43.935 [2024-11-04 16:19:02.631134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.935 [2024-11-04 16:19:02.631159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:43.935 [2024-11-04 16:19:02.631171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:43.935 [2024-11-04 16:19:02.631184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.935 [2024-11-04 16:19:02.638761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.935 [2024-11-04 16:19:02.638917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:43.935 [2024-11-04 16:19:02.639015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.548 ms 00:24:43.936 [2024-11-04 16:19:02.639056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.936 [2024-11-04 16:19:02.639182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.936 [2024-11-04 16:19:02.639222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:43.936 [2024-11-04 16:19:02.639303] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:24:43.936 [2024-11-04 16:19:02.639346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.936 [2024-11-04 16:19:02.639426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.936 [2024-11-04 16:19:02.639464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:43.936 [2024-11-04 16:19:02.639496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:43.936 [2024-11-04 16:19:02.639576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.936 [2024-11-04 16:19:02.639698] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:43.936 [2024-11-04 16:19:02.644926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.936 [2024-11-04 16:19:02.645050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:43.936 [2024-11-04 16:19:02.645143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.242 ms 00:24:43.936 [2024-11-04 16:19:02.645179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.936 [2024-11-04 16:19:02.645251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.936 [2024-11-04 16:19:02.645285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:43.936 [2024-11-04 16:19:02.645453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:43.936 [2024-11-04 16:19:02.645489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.936 [2024-11-04 16:19:02.645552] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:43.936 [2024-11-04 16:19:02.645699] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:43.936 [2024-11-04 16:19:02.645774] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:43.936 [2024-11-04 16:19:02.645893] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:43.936 [2024-11-04 16:19:02.645953] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:43.936 [2024-11-04 16:19:02.646003] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:43.936 [2024-11-04 16:19:02.646117] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:43.936 [2024-11-04 16:19:02.646148] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:43.936 [2024-11-04 16:19:02.646219] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:43.936 [2024-11-04 16:19:02.646252] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:43.936 [2024-11-04 16:19:02.646286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.936 [2024-11-04 16:19:02.646317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:43.936 [2024-11-04 16:19:02.646350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:24:43.936 [2024-11-04 16:19:02.646391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.936 [2024-11-04 16:19:02.646576] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.936 [2024-11-04 16:19:02.646629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:43.936 [2024-11-04 16:19:02.646663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:43.936 [2024-11-04 16:19:02.646695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.936 [2024-11-04 16:19:02.646821] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:43.936 [2024-11-04 16:19:02.646914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:43.936 [2024-11-04 16:19:02.646955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.936 [2024-11-04 16:19:02.646987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.936 [2024-11-04 16:19:02.647021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:43.936 [2024-11-04 16:19:02.647050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:43.936 [2024-11-04 16:19:02.647142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:43.936 [2024-11-04 16:19:02.647177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:43.936 [2024-11-04 16:19:02.647212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:43.936 [2024-11-04 16:19:02.647329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.936 [2024-11-04 16:19:02.647367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:43.936 [2024-11-04 16:19:02.647398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:43.936 [2024-11-04 16:19:02.647430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.936 [2024-11-04 16:19:02.647459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:43.936 [2024-11-04 16:19:02.647493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:43.936 [2024-11-04 16:19:02.647523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.936 [2024-11-04 16:19:02.647602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:43.936 [2024-11-04 16:19:02.647637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:43.936 [2024-11-04 16:19:02.647671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.936 [2024-11-04 16:19:02.647711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:43.936 [2024-11-04 16:19:02.647727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:43.936 [2024-11-04 16:19:02.647736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.936 [2024-11-04 16:19:02.647748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:43.936 [2024-11-04 16:19:02.647772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:43.936 [2024-11-04 16:19:02.647785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.936 [2024-11-04 16:19:02.647794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:43.936 [2024-11-04 16:19:02.647806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:43.936 [2024-11-04 16:19:02.647815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.936 [2024-11-04 16:19:02.647827] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:43.936 [2024-11-04 16:19:02.647837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:43.936 [2024-11-04 16:19:02.647849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.936 [2024-11-04 16:19:02.647859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:43.936 [2024-11-04 16:19:02.647873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:43.936 [2024-11-04 16:19:02.647885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.936 [2024-11-04 16:19:02.647897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:43.936 [2024-11-04 16:19:02.647906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:43.936 [2024-11-04 16:19:02.647918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.936 [2024-11-04 16:19:02.647927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:43.936 [2024-11-04 16:19:02.647939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:43.936 [2024-11-04 16:19:02.647949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.936 [2024-11-04 16:19:02.647960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:43.936 [2024-11-04 16:19:02.647970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:43.936 [2024-11-04 16:19:02.647981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.936 [2024-11-04 16:19:02.647990] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:43.936 [2024-11-04 16:19:02.648003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:43.936 [2024-11-04 16:19:02.648013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.936 [2024-11-04 16:19:02.648027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.936 [2024-11-04 16:19:02.648037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:43.936 [2024-11-04 16:19:02.648052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:43.936 [2024-11-04 16:19:02.648062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:43.936 [2024-11-04 16:19:02.648074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:43.936 [2024-11-04 16:19:02.648083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:43.936 [2024-11-04 16:19:02.648094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:43.936 [2024-11-04 16:19:02.648109] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:43.936 [2024-11-04 16:19:02.648124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.936 [2024-11-04 16:19:02.648140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:43.936 [2024-11-04 16:19:02.648153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:43.936 [2024-11-04 16:19:02.648163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:43.936 [2024-11-04 16:19:02.648176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:43.936 [2024-11-04 16:19:02.648186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:43.936 [2024-11-04 16:19:02.648199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:43.936 [2024-11-04 16:19:02.648210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:43.936 [2024-11-04 16:19:02.648222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:43.936 [2024-11-04 16:19:02.648233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:43.936 [2024-11-04 16:19:02.648248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:43.936 [2024-11-04 16:19:02.648258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:43.936 [2024-11-04 16:19:02.648271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:43.937 [2024-11-04 16:19:02.648280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:43.937 [2024-11-04 16:19:02.648294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:43.937 [2024-11-04 16:19:02.648304] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:43.937 [2024-11-04 16:19:02.648318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.937 [2024-11-04 16:19:02.648329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:43.937 [2024-11-04 16:19:02.648341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:43.937 [2024-11-04 16:19:02.648351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:43.937 [2024-11-04 16:19:02.648363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:43.937 [2024-11-04 16:19:02.648375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.937 [2024-11-04 16:19:02.648387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:43.937 [2024-11-04 16:19:02.648397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.626 ms 00:24:43.937 [2024-11-04 16:19:02.648409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.937 [2024-11-04 16:19:02.648454] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:43.937 [2024-11-04 16:19:02.648472] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:48.127 [2024-11-04 16:19:06.135321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.127 [2024-11-04 16:19:06.135642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:48.127 [2024-11-04 16:19:06.135671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3492.525 ms 00:24:48.127 [2024-11-04 16:19:06.135686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.127 [2024-11-04 16:19:06.171690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.127 [2024-11-04 16:19:06.171757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:48.127 [2024-11-04 16:19:06.171773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.749 ms 00:24:48.127 [2024-11-04 16:19:06.171785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.127 [2024-11-04 16:19:06.171909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.127 [2024-11-04 16:19:06.171926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:48.127 [2024-11-04 16:19:06.171937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:48.127 [2024-11-04 16:19:06.171953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.127 [2024-11-04 16:19:06.218708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.127 [2024-11-04 16:19:06.218760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:48.127 [2024-11-04 16:19:06.218775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.789 ms 00:24:48.127 [2024-11-04 16:19:06.218788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.127 [2024-11-04 16:19:06.218824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.127 [2024-11-04 16:19:06.218842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:48.127 [2024-11-04 16:19:06.218852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:48.127 [2024-11-04 16:19:06.218864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.127 [2024-11-04 16:19:06.219389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.127 [2024-11-04 16:19:06.219416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:48.127 [2024-11-04 16:19:06.219428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:24:48.127 [2024-11-04 16:19:06.219442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.127 [2024-11-04 16:19:06.219539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.127 [2024-11-04 16:19:06.219554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:48.127 [2024-11-04 16:19:06.219567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:24:48.127 [2024-11-04 16:19:06.219582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.127 [2024-11-04 16:19:06.239386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.127 [2024-11-04 16:19:06.239573] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:48.127 [2024-11-04 16:19:06.239655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.815 ms 00:24:48.128 [2024-11-04 16:19:06.239696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.128 [2024-11-04 16:19:06.251776] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:48.128 [2024-11-04 16:19:06.255169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.128 [2024-11-04 16:19:06.255197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:48.128 [2024-11-04 16:19:06.255212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.367 ms 00:24:48.128 [2024-11-04 16:19:06.255222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.128 [2024-11-04 16:19:06.367822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.128 [2024-11-04 16:19:06.367892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:48.128 [2024-11-04 16:19:06.367913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.744 ms 00:24:48.128 [2024-11-04 16:19:06.367924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.128 [2024-11-04 16:19:06.368101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.128 [2024-11-04 16:19:06.368118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:48.128 [2024-11-04 16:19:06.368134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:24:48.128 [2024-11-04 16:19:06.368145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.128 [2024-11-04 16:19:06.402900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.128 [2024-11-04 16:19:06.403059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:48.128 [2024-11-04 16:19:06.403086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.754 ms 00:24:48.128 [2024-11-04 16:19:06.403098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.128 [2024-11-04 16:19:06.436712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.128 [2024-11-04 16:19:06.436758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:48.128 [2024-11-04 16:19:06.436775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.621 ms 00:24:48.128 [2024-11-04 16:19:06.436785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.128 [2024-11-04 16:19:06.437525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.128 [2024-11-04 16:19:06.437554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:48.128 [2024-11-04 16:19:06.437569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:24:48.128 [2024-11-04 16:19:06.437580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.128 [2024-11-04 16:19:06.533784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.128 [2024-11-04 16:19:06.533822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:48.128 [2024-11-04 16:19:06.533843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.297 ms 00:24:48.128 [2024-11-04 16:19:06.533854] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.128 [2024-11-04 16:19:06.570164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.128 [2024-11-04 16:19:06.570205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:48.128 [2024-11-04 16:19:06.570221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.266 ms 00:24:48.128 [2024-11-04 16:19:06.570232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.128 [2024-11-04 16:19:06.605551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.128 [2024-11-04 16:19:06.605723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:48.128 [2024-11-04 16:19:06.605765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.331 ms 00:24:48.128 [2024-11-04 16:19:06.605777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.128 [2024-11-04 16:19:06.641564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.128 [2024-11-04 16:19:06.641602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:48.128 [2024-11-04 16:19:06.641618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.803 ms 00:24:48.128 [2024-11-04 16:19:06.641628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.128 [2024-11-04 16:19:06.641673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.128 [2024-11-04 16:19:06.641684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:48.128 [2024-11-04 16:19:06.641700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:48.128 [2024-11-04 16:19:06.641710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.128 [2024-11-04 16:19:06.641830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.128 [2024-11-04 16:19:06.641844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:48.128 [2024-11-04 16:19:06.641860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:48.128 [2024-11-04 16:19:06.641869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.128 [2024-11-04 16:19:06.642997] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4020.671 ms, result 0 00:24:48.128 { 00:24:48.128 "name": "ftl0", 00:24:48.128 "uuid": "7ec1fce4-e67d-4e51-9974-2fdcba28edba" 00:24:48.128 } 00:24:48.128 16:19:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:48.128 16:19:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:48.387 16:19:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:48.387 16:19:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:48.387 16:19:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:48.387 /dev/nbd0 00:24:48.646 16:19:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:48.646 16:19:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:48.646 16:19:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:24:48.646 16:19:07 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:48.646 16:19:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:48.646 16:19:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:48.646 16:19:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:24:48.646 16:19:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:48.646 16:19:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:48.646 16:19:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:48.646 1+0 records in 00:24:48.646 1+0 records out 00:24:48.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592669 s, 6.9 MB/s 00:24:48.646 16:19:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:48.646 16:19:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:24:48.646 16:19:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:48.646 16:19:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:48.646 16:19:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:24:48.646 16:19:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:48.646 [2024-11-04 16:19:07.240008] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:24:48.646 [2024-11-04 16:19:07.240127] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78584 ] 00:24:48.905 [2024-11-04 16:19:07.421606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.905 [2024-11-04 16:19:07.527357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.282  [2024-11-04T16:19:09.940Z] Copying: 214/1024 [MB] (214 MBps) [2024-11-04T16:19:10.876Z] Copying: 431/1024 [MB] (217 MBps) [2024-11-04T16:19:11.840Z] Copying: 648/1024 [MB] (216 MBps) [2024-11-04T16:19:12.775Z] Copying: 855/1024 [MB] (206 MBps) [2024-11-04T16:19:14.151Z] Copying: 1024/1024 [MB] (average 211 MBps) 00:24:55.429 00:24:55.429 16:19:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:56.804 16:19:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:57.064 [2024-11-04 16:19:15.564943] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
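The bdev stack that ftl0 sits on can be read off the RPCs in the trace above. Condensed into the order they were issued, and reusing the PCI addresses, sizes and UUIDs reported earlier in this run (a sketch of this particular run, not a general recipe):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Data device at 0000:00:11.0 becomes nvme0n1; a thin-provisioned 103424 MiB lvol on it is the FTL base bdev.
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
$rpc bdev_lvol_create_lvstore nvme0n1 lvs
$rpc bdev_lvol_create nvme0n1p0 103424 -t -u 323048fa-1657-4333-a4c4-269551e56038
# Cache device at 0000:00:10.0; a 5171 MiB split (nvc0n1p0) serves as the FTL write-buffer / NV cache.
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$rpc bdev_split_create nvc0n1 -s 5171 1
# FTL bdev with a 10 MiB L2P DRAM limit; startup scrubs the NV cache, hence the ~3.5 s scrub / ~4 s startup logged above.
$rpc -t 240 bdev_ftl_create -b ftl0 -d 4e283fec-a4c4-4f05-ad17-a0ee365fcca6 --l2p_dram_limit 10 -c nvc0n1p0
# Expose ftl0 as /dev/nbd0 so spdk_dd can push data through it.
$rpc nbd_start_disk ftl0 /dev/nbd0

After this, the first spdk_dd pass above writes 262144 blocks of 4 KiB (1 GiB) of /dev/urandom into testfile, the md5sum of that file is recorded, and the second pass, whose progress follows below, copies the file onto /dev/nbd0 with direct I/O, presumably so the checksum can be compared against the device contents once it is brought back up after the dirty shutdown.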
00:24:57.064 [2024-11-04 16:19:15.565054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78673 ] 00:24:57.064 [2024-11-04 16:19:15.744352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.323 [2024-11-04 16:19:15.850911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.698  [2024-11-04T16:19:18.355Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-04T16:19:19.289Z] Copying: 34/1024 [MB] (16 MBps) [2024-11-04T16:19:20.225Z] Copying: 50/1024 [MB] (16 MBps) [2024-11-04T16:19:21.159Z] Copying: 68/1024 [MB] (17 MBps) [2024-11-04T16:19:22.536Z] Copying: 84/1024 [MB] (16 MBps) [2024-11-04T16:19:23.521Z] Copying: 102/1024 [MB] (17 MBps) [2024-11-04T16:19:24.457Z] Copying: 119/1024 [MB] (17 MBps) [2024-11-04T16:19:25.392Z] Copying: 137/1024 [MB] (17 MBps) [2024-11-04T16:19:26.326Z] Copying: 154/1024 [MB] (17 MBps) [2024-11-04T16:19:27.262Z] Copying: 172/1024 [MB] (17 MBps) [2024-11-04T16:19:28.197Z] Copying: 190/1024 [MB] (17 MBps) [2024-11-04T16:19:29.132Z] Copying: 208/1024 [MB] (18 MBps) [2024-11-04T16:19:30.506Z] Copying: 226/1024 [MB] (17 MBps) [2024-11-04T16:19:31.441Z] Copying: 243/1024 [MB] (17 MBps) [2024-11-04T16:19:32.376Z] Copying: 261/1024 [MB] (17 MBps) [2024-11-04T16:19:33.312Z] Copying: 279/1024 [MB] (17 MBps) [2024-11-04T16:19:34.247Z] Copying: 297/1024 [MB] (17 MBps) [2024-11-04T16:19:35.182Z] Copying: 315/1024 [MB] (18 MBps) [2024-11-04T16:19:36.117Z] Copying: 333/1024 [MB] (18 MBps) [2024-11-04T16:19:37.492Z] Copying: 351/1024 [MB] (17 MBps) [2024-11-04T16:19:38.428Z] Copying: 368/1024 [MB] (17 MBps) [2024-11-04T16:19:39.364Z] Copying: 386/1024 [MB] (17 MBps) [2024-11-04T16:19:40.300Z] Copying: 404/1024 [MB] (17 MBps) [2024-11-04T16:19:41.236Z] Copying: 421/1024 [MB] (17 MBps) [2024-11-04T16:19:42.173Z] Copying: 439/1024 [MB] (17 MBps) [2024-11-04T16:19:43.109Z] Copying: 456/1024 [MB] (17 MBps) [2024-11-04T16:19:44.484Z] Copying: 474/1024 [MB] (17 MBps) [2024-11-04T16:19:45.421Z] Copying: 491/1024 [MB] (17 MBps) [2024-11-04T16:19:46.360Z] Copying: 509/1024 [MB] (17 MBps) [2024-11-04T16:19:47.296Z] Copying: 526/1024 [MB] (17 MBps) [2024-11-04T16:19:48.233Z] Copying: 544/1024 [MB] (17 MBps) [2024-11-04T16:19:49.169Z] Copying: 562/1024 [MB] (17 MBps) [2024-11-04T16:19:50.124Z] Copying: 580/1024 [MB] (17 MBps) [2024-11-04T16:19:51.500Z] Copying: 598/1024 [MB] (17 MBps) [2024-11-04T16:19:52.437Z] Copying: 616/1024 [MB] (18 MBps) [2024-11-04T16:19:53.374Z] Copying: 633/1024 [MB] (17 MBps) [2024-11-04T16:19:54.311Z] Copying: 651/1024 [MB] (17 MBps) [2024-11-04T16:19:55.248Z] Copying: 668/1024 [MB] (17 MBps) [2024-11-04T16:19:56.184Z] Copying: 686/1024 [MB] (17 MBps) [2024-11-04T16:19:57.120Z] Copying: 704/1024 [MB] (17 MBps) [2024-11-04T16:19:58.512Z] Copying: 722/1024 [MB] (17 MBps) [2024-11-04T16:19:59.077Z] Copying: 739/1024 [MB] (17 MBps) [2024-11-04T16:20:00.450Z] Copying: 757/1024 [MB] (17 MBps) [2024-11-04T16:20:01.386Z] Copying: 774/1024 [MB] (17 MBps) [2024-11-04T16:20:02.320Z] Copying: 792/1024 [MB] (17 MBps) [2024-11-04T16:20:03.255Z] Copying: 809/1024 [MB] (17 MBps) [2024-11-04T16:20:04.191Z] Copying: 826/1024 [MB] (17 MBps) [2024-11-04T16:20:05.126Z] Copying: 843/1024 [MB] (17 MBps) [2024-11-04T16:20:06.061Z] Copying: 860/1024 [MB] (17 MBps) [2024-11-04T16:20:07.437Z] Copying: 878/1024 [MB] (17 MBps) 
[2024-11-04T16:20:08.372Z] Copying: 895/1024 [MB] (17 MBps) [2024-11-04T16:20:09.342Z] Copying: 913/1024 [MB] (17 MBps) [2024-11-04T16:20:10.277Z] Copying: 930/1024 [MB] (17 MBps) [2024-11-04T16:20:11.212Z] Copying: 947/1024 [MB] (17 MBps) [2024-11-04T16:20:12.148Z] Copying: 965/1024 [MB] (17 MBps) [2024-11-04T16:20:13.085Z] Copying: 983/1024 [MB] (17 MBps) [2024-11-04T16:20:14.462Z] Copying: 1000/1024 [MB] (17 MBps) [2024-11-04T16:20:14.462Z] Copying: 1018/1024 [MB] (17 MBps) [2024-11-04T16:20:15.398Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:25:56.676 00:25:56.935 16:20:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:56.935 16:20:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:56.935 16:20:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:57.195 [2024-11-04 16:20:15.801068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.195 [2024-11-04 16:20:15.801123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:57.195 [2024-11-04 16:20:15.801139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:57.195 [2024-11-04 16:20:15.801153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.195 [2024-11-04 16:20:15.801178] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:57.195 [2024-11-04 16:20:15.805427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.195 [2024-11-04 16:20:15.805461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:57.195 [2024-11-04 16:20:15.805477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.230 ms 00:25:57.195 [2024-11-04 16:20:15.805488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.195 [2024-11-04 16:20:15.807614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.195 [2024-11-04 16:20:15.807655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:57.195 [2024-11-04 16:20:15.807672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.084 ms 00:25:57.195 [2024-11-04 16:20:15.807682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.195 [2024-11-04 16:20:15.825515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.195 [2024-11-04 16:20:15.825559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:57.195 [2024-11-04 16:20:15.825575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.835 ms 00:25:57.195 [2024-11-04 16:20:15.825586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.195 [2024-11-04 16:20:15.830565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.195 [2024-11-04 16:20:15.830605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:57.195 [2024-11-04 16:20:15.830637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.944 ms 00:25:57.195 [2024-11-04 16:20:15.830647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.195 [2024-11-04 16:20:15.865679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.195 [2024-11-04 16:20:15.865735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
NV cache metadata 00:25:57.195 [2024-11-04 16:20:15.865785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.003 ms 00:25:57.195 [2024-11-04 16:20:15.865796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.195 [2024-11-04 16:20:15.887177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.195 [2024-11-04 16:20:15.887217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:57.195 [2024-11-04 16:20:15.887234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.367 ms 00:25:57.195 [2024-11-04 16:20:15.887247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.195 [2024-11-04 16:20:15.887407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.195 [2024-11-04 16:20:15.887420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:57.195 [2024-11-04 16:20:15.887433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:25:57.195 [2024-11-04 16:20:15.887443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.454 [2024-11-04 16:20:15.922221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.454 [2024-11-04 16:20:15.922259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:57.454 [2024-11-04 16:20:15.922274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.814 ms 00:25:57.454 [2024-11-04 16:20:15.922283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.454 [2024-11-04 16:20:15.956175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.454 [2024-11-04 16:20:15.956212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:57.454 [2024-11-04 16:20:15.956229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.904 ms 00:25:57.454 [2024-11-04 16:20:15.956238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.454 [2024-11-04 16:20:15.989775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.454 [2024-11-04 16:20:15.989808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:57.454 [2024-11-04 16:20:15.989823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.543 ms 00:25:57.454 [2024-11-04 16:20:15.989849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.454 [2024-11-04 16:20:16.023243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.454 [2024-11-04 16:20:16.023280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:57.454 [2024-11-04 16:20:16.023295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.351 ms 00:25:57.454 [2024-11-04 16:20:16.023320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.454 [2024-11-04 16:20:16.023363] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:57.455 [2024-11-04 16:20:16.023379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023418] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 
16:20:16.023722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.023980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:25:57.455 [2024-11-04 16:20:16.024067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:57.455 [2024-11-04 16:20:16.024263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:57.456 [2024-11-04 16:20:16.024651] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:57.456 [2024-11-04 16:20:16.024664] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7ec1fce4-e67d-4e51-9974-2fdcba28edba 00:25:57.456 [2024-11-04 16:20:16.024675] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:57.456 
[2024-11-04 16:20:16.024690] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:57.456 [2024-11-04 16:20:16.024700] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:57.456 [2024-11-04 16:20:16.024723] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:57.456 [2024-11-04 16:20:16.024733] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:57.456 [2024-11-04 16:20:16.024755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:57.456 [2024-11-04 16:20:16.024766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:57.456 [2024-11-04 16:20:16.024777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:57.456 [2024-11-04 16:20:16.024786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:57.456 [2024-11-04 16:20:16.024799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.456 [2024-11-04 16:20:16.024809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:57.456 [2024-11-04 16:20:16.024822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.440 ms 00:25:57.456 [2024-11-04 16:20:16.024832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.456 [2024-11-04 16:20:16.044260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.456 [2024-11-04 16:20:16.044294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:57.456 [2024-11-04 16:20:16.044312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.402 ms 00:25:57.456 [2024-11-04 16:20:16.044322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.456 [2024-11-04 16:20:16.044883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.456 [2024-11-04 16:20:16.044896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:57.456 [2024-11-04 16:20:16.044909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:25:57.456 [2024-11-04 16:20:16.044919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.456 [2024-11-04 16:20:16.105616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.456 [2024-11-04 16:20:16.105654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.456 [2024-11-04 16:20:16.105670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.456 [2024-11-04 16:20:16.105679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.456 [2024-11-04 16:20:16.105738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.456 [2024-11-04 16:20:16.105759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.456 [2024-11-04 16:20:16.105789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.456 [2024-11-04 16:20:16.105798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.456 [2024-11-04 16:20:16.105890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.456 [2024-11-04 16:20:16.105903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.456 [2024-11-04 16:20:16.105919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.456 [2024-11-04 16:20:16.105945] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.456 [2024-11-04 16:20:16.105969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.456 [2024-11-04 16:20:16.105980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.456 [2024-11-04 16:20:16.105993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.456 [2024-11-04 16:20:16.106003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.716 [2024-11-04 16:20:16.221808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.716 [2024-11-04 16:20:16.221861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:57.716 [2024-11-04 16:20:16.221878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.716 [2024-11-04 16:20:16.221888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.716 [2024-11-04 16:20:16.315295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.716 [2024-11-04 16:20:16.315342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:57.716 [2024-11-04 16:20:16.315358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.716 [2024-11-04 16:20:16.315368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.716 [2024-11-04 16:20:16.315471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.716 [2024-11-04 16:20:16.315482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:57.716 [2024-11-04 16:20:16.315495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.716 [2024-11-04 16:20:16.315507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.716 [2024-11-04 16:20:16.315561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.716 [2024-11-04 16:20:16.315572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:57.716 [2024-11-04 16:20:16.315585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.716 [2024-11-04 16:20:16.315594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.716 [2024-11-04 16:20:16.315692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.716 [2024-11-04 16:20:16.315704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:57.716 [2024-11-04 16:20:16.315716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.716 [2024-11-04 16:20:16.315725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.716 [2024-11-04 16:20:16.315809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.716 [2024-11-04 16:20:16.315822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:57.716 [2024-11-04 16:20:16.315834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.716 [2024-11-04 16:20:16.315844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.716 [2024-11-04 16:20:16.315887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.716 [2024-11-04 16:20:16.315897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:57.716 [2024-11-04 16:20:16.315926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:25:57.716 [2024-11-04 16:20:16.315937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.716 [2024-11-04 16:20:16.315988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.716 [2024-11-04 16:20:16.316001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:57.716 [2024-11-04 16:20:16.316014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.716 [2024-11-04 16:20:16.316024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.716 [2024-11-04 16:20:16.316186] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 515.895 ms, result 0 00:25:57.716 true 00:25:57.716 16:20:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78438 00:25:57.716 16:20:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78438 00:25:57.716 16:20:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:57.975 [2024-11-04 16:20:16.443465] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:25:57.975 [2024-11-04 16:20:16.443589] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79292 ] 00:25:57.975 [2024-11-04 16:20:16.632352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.235 [2024-11-04 16:20:16.738122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.612  [2024-11-04T16:20:19.269Z] Copying: 214/1024 [MB] (214 MBps) [2024-11-04T16:20:20.207Z] Copying: 429/1024 [MB] (215 MBps) [2024-11-04T16:20:21.143Z] Copying: 647/1024 [MB] (218 MBps) [2024-11-04T16:20:22.080Z] Copying: 857/1024 [MB] (209 MBps) [2024-11-04T16:20:23.017Z] Copying: 1024/1024 [MB] (average 213 MBps) 00:26:04.295 00:26:04.295 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78438 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:04.295 16:20:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:04.554 [2024-11-04 16:20:23.024884] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
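At this point the trace has shown a clean unload: @78-@80 flush the nbd export, detach it, and call bdev_ftl_unload, which persists the L2P, NV cache, valid-map, P2L, band and trim metadata plus the superblock, sets the FTL clean state, and finishes with "FTL shutdown ... result 0". The test then SIGKILLs the spdk_tgt process (which it appears to use to stop the target abruptly) and prepares a second 1 GiB random file for the post-kill write. The sketch below restates those commands; they are copied from the trace, with only the SPDK_DIR shorthand added, and the PID 78438 is of course specific to this run.

# Shutdown-and-kill sequence as traced above (commands copied from the log).
SPDK_DIR=/home/vagrant/spdk_repo/spdk

# @78-@80: flush and detach the nbd export, then unload the FTL bdev so it
# persists its metadata and records a clean state.
sync /dev/nbd0
"$SPDK_DIR"/scripts/rpc.py nbd_stop_disk /dev/nbd0
"$SPDK_DIR"/scripts/rpc.py bdev_ftl_unload -b ftl0

# @83-@87: SIGKILL the spdk_tgt process (78438 in this run), clean up its shm
# trace file, and generate a second 1 GiB random file for the next write.
kill -9 78438
rm -f /dev/shm/spdk_tgt_trace.pid78438
"$SPDK_DIR"/build/bin/spdk_dd --if=/dev/urandom \
    --of="$SPDK_DIR"/test/ftl/testfile2 --bs=4096 --count=262144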
00:26:04.554 [2024-11-04 16:20:23.025394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79361 ] 00:26:04.554 [2024-11-04 16:20:23.207460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.813 [2024-11-04 16:20:23.312680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.072 [2024-11-04 16:20:23.662523] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:05.072 [2024-11-04 16:20:23.662613] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:05.072 [2024-11-04 16:20:23.728586] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:05.072 [2024-11-04 16:20:23.728914] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:05.072 [2024-11-04 16:20:23.729138] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:05.331 [2024-11-04 16:20:24.046162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.331 [2024-11-04 16:20:24.046371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:05.331 [2024-11-04 16:20:24.046396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:05.331 [2024-11-04 16:20:24.046409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.331 [2024-11-04 16:20:24.046476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.331 [2024-11-04 16:20:24.046489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:05.331 [2024-11-04 16:20:24.046501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:05.331 [2024-11-04 16:20:24.046511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.331 [2024-11-04 16:20:24.046535] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:05.331 [2024-11-04 16:20:24.047613] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:05.331 [2024-11-04 16:20:24.047636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.331 [2024-11-04 16:20:24.047646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:05.331 [2024-11-04 16:20:24.047658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.108 ms 00:26:05.331 [2024-11-04 16:20:24.047668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.331 [2024-11-04 16:20:24.049143] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:05.592 [2024-11-04 16:20:24.067490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.592 [2024-11-04 16:20:24.067634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:05.592 [2024-11-04 16:20:24.067655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.377 ms 00:26:05.592 [2024-11-04 16:20:24.067666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.592 [2024-11-04 16:20:24.067723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.592 [2024-11-04 16:20:24.067736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:26:05.592 [2024-11-04 16:20:24.067769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:05.592 [2024-11-04 16:20:24.067781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.592 [2024-11-04 16:20:24.074711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.592 [2024-11-04 16:20:24.074871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:05.592 [2024-11-04 16:20:24.074908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.867 ms 00:26:05.592 [2024-11-04 16:20:24.074919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.592 [2024-11-04 16:20:24.075003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.592 [2024-11-04 16:20:24.075016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:05.592 [2024-11-04 16:20:24.075027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:26:05.592 [2024-11-04 16:20:24.075037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.592 [2024-11-04 16:20:24.075078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.592 [2024-11-04 16:20:24.075093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:05.592 [2024-11-04 16:20:24.075104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:05.592 [2024-11-04 16:20:24.075114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.592 [2024-11-04 16:20:24.075137] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:05.592 [2024-11-04 16:20:24.079848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.592 [2024-11-04 16:20:24.079878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:05.592 [2024-11-04 16:20:24.079890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.723 ms 00:26:05.592 [2024-11-04 16:20:24.079915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.592 [2024-11-04 16:20:24.079946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.592 [2024-11-04 16:20:24.079956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:05.592 [2024-11-04 16:20:24.079966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:05.592 [2024-11-04 16:20:24.079976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.592 [2024-11-04 16:20:24.080027] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:05.592 [2024-11-04 16:20:24.080065] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:05.592 [2024-11-04 16:20:24.080099] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:05.592 [2024-11-04 16:20:24.080117] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:05.592 [2024-11-04 16:20:24.080204] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:05.592 [2024-11-04 16:20:24.080218] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:05.592 
[2024-11-04 16:20:24.080230] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:05.592 [2024-11-04 16:20:24.080243] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:05.592 [2024-11-04 16:20:24.080258] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:05.592 [2024-11-04 16:20:24.080269] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:05.592 [2024-11-04 16:20:24.080279] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:05.592 [2024-11-04 16:20:24.080288] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:05.592 [2024-11-04 16:20:24.080298] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:05.592 [2024-11-04 16:20:24.080308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.592 [2024-11-04 16:20:24.080318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:05.592 [2024-11-04 16:20:24.080329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:26:05.592 [2024-11-04 16:20:24.080338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.592 [2024-11-04 16:20:24.080408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.592 [2024-11-04 16:20:24.080422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:05.592 [2024-11-04 16:20:24.080432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:05.592 [2024-11-04 16:20:24.080442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.592 [2024-11-04 16:20:24.080531] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:05.592 [2024-11-04 16:20:24.080546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:05.592 [2024-11-04 16:20:24.080556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:05.592 [2024-11-04 16:20:24.080567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.592 [2024-11-04 16:20:24.080577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:05.592 [2024-11-04 16:20:24.080586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:05.592 [2024-11-04 16:20:24.080596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:05.592 [2024-11-04 16:20:24.080606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:05.592 [2024-11-04 16:20:24.080616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:05.592 [2024-11-04 16:20:24.080625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:05.592 [2024-11-04 16:20:24.080635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:05.592 [2024-11-04 16:20:24.080654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:05.592 [2024-11-04 16:20:24.080664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:05.592 [2024-11-04 16:20:24.080673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:05.592 [2024-11-04 16:20:24.080683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:05.592 [2024-11-04 16:20:24.080692] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.592 [2024-11-04 16:20:24.080701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:05.592 [2024-11-04 16:20:24.080710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:05.592 [2024-11-04 16:20:24.080719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.592 [2024-11-04 16:20:24.080728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:05.592 [2024-11-04 16:20:24.080737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:05.592 [2024-11-04 16:20:24.080770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.592 [2024-11-04 16:20:24.080781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:05.592 [2024-11-04 16:20:24.080790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:05.592 [2024-11-04 16:20:24.080799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.592 [2024-11-04 16:20:24.080808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:05.592 [2024-11-04 16:20:24.080817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:05.592 [2024-11-04 16:20:24.080826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.592 [2024-11-04 16:20:24.080835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:05.592 [2024-11-04 16:20:24.080844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:05.592 [2024-11-04 16:20:24.080853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.592 [2024-11-04 16:20:24.080862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:05.592 [2024-11-04 16:20:24.080871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:05.592 [2024-11-04 16:20:24.080880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:05.592 [2024-11-04 16:20:24.080890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:05.592 [2024-11-04 16:20:24.080899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:05.592 [2024-11-04 16:20:24.080907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:05.592 [2024-11-04 16:20:24.080916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:05.592 [2024-11-04 16:20:24.080925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:05.592 [2024-11-04 16:20:24.080934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.592 [2024-11-04 16:20:24.080943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:05.592 [2024-11-04 16:20:24.080953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:05.592 [2024-11-04 16:20:24.080963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.592 [2024-11-04 16:20:24.080972] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:05.592 [2024-11-04 16:20:24.080982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:05.593 [2024-11-04 16:20:24.080991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:05.593 [2024-11-04 16:20:24.081005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.593 [2024-11-04 
16:20:24.081015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:05.593 [2024-11-04 16:20:24.081025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:05.593 [2024-11-04 16:20:24.081046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:05.593 [2024-11-04 16:20:24.081056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:05.593 [2024-11-04 16:20:24.081066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:05.593 [2024-11-04 16:20:24.081075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:05.593 [2024-11-04 16:20:24.081086] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:05.593 [2024-11-04 16:20:24.081098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:05.593 [2024-11-04 16:20:24.081109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:05.593 [2024-11-04 16:20:24.081119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:05.593 [2024-11-04 16:20:24.081130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:05.593 [2024-11-04 16:20:24.081142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:05.593 [2024-11-04 16:20:24.081153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:05.593 [2024-11-04 16:20:24.081163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:05.593 [2024-11-04 16:20:24.081173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:05.593 [2024-11-04 16:20:24.081184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:05.593 [2024-11-04 16:20:24.081194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:05.593 [2024-11-04 16:20:24.081204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:05.593 [2024-11-04 16:20:24.081214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:05.593 [2024-11-04 16:20:24.081225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:05.593 [2024-11-04 16:20:24.081235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:05.593 [2024-11-04 16:20:24.081246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:05.593 [2024-11-04 16:20:24.081256] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:26:05.593 [2024-11-04 16:20:24.081267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:05.593 [2024-11-04 16:20:24.081278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:05.593 [2024-11-04 16:20:24.081288] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:05.593 [2024-11-04 16:20:24.081298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:05.593 [2024-11-04 16:20:24.081309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:05.593 [2024-11-04 16:20:24.081320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.593 [2024-11-04 16:20:24.081330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:05.593 [2024-11-04 16:20:24.081340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:26:05.593 [2024-11-04 16:20:24.081349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.593 [2024-11-04 16:20:24.118235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.593 [2024-11-04 16:20:24.118403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:05.593 [2024-11-04 16:20:24.118424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.899 ms 00:26:05.593 [2024-11-04 16:20:24.118436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.593 [2024-11-04 16:20:24.118528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.593 [2024-11-04 16:20:24.118546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:05.593 [2024-11-04 16:20:24.118557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:05.593 [2024-11-04 16:20:24.118568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.593 [2024-11-04 16:20:24.172569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.593 [2024-11-04 16:20:24.172604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:05.593 [2024-11-04 16:20:24.172617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.023 ms 00:26:05.593 [2024-11-04 16:20:24.172631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.593 [2024-11-04 16:20:24.172662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.593 [2024-11-04 16:20:24.172673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:05.593 [2024-11-04 16:20:24.172684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:05.593 [2024-11-04 16:20:24.172693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.593 [2024-11-04 16:20:24.173211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.593 [2024-11-04 16:20:24.173227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:05.593 [2024-11-04 16:20:24.173239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:26:05.593 [2024-11-04 16:20:24.173249] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.593 [2024-11-04 16:20:24.173371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.593 [2024-11-04 16:20:24.173392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:05.593 [2024-11-04 16:20:24.173403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:26:05.593 [2024-11-04 16:20:24.173413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.593 [2024-11-04 16:20:24.192075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.593 [2024-11-04 16:20:24.192207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:05.593 [2024-11-04 16:20:24.192343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.673 ms 00:26:05.593 [2024-11-04 16:20:24.192380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.593 [2024-11-04 16:20:24.210669] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:05.593 [2024-11-04 16:20:24.210867] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:05.593 [2024-11-04 16:20:24.210994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.593 [2024-11-04 16:20:24.211029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:05.593 [2024-11-04 16:20:24.211059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.525 ms 00:26:05.593 [2024-11-04 16:20:24.211089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.593 [2024-11-04 16:20:24.239396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.593 [2024-11-04 16:20:24.239525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:05.593 [2024-11-04 16:20:24.239696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.294 ms 00:26:05.593 [2024-11-04 16:20:24.239733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.593 [2024-11-04 16:20:24.256843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.593 [2024-11-04 16:20:24.256972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:05.593 [2024-11-04 16:20:24.257062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.045 ms 00:26:05.593 [2024-11-04 16:20:24.257096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.593 [2024-11-04 16:20:24.274032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.593 [2024-11-04 16:20:24.274155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:05.593 [2024-11-04 16:20:24.274250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.908 ms 00:26:05.593 [2024-11-04 16:20:24.274284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.593 [2024-11-04 16:20:24.275042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.593 [2024-11-04 16:20:24.275164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:05.593 [2024-11-04 16:20:24.275259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.633 ms 00:26:05.593 [2024-11-04 16:20:24.275294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:05.852 [2024-11-04 16:20:24.356216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.852 [2024-11-04 16:20:24.356455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:05.852 [2024-11-04 16:20:24.356478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.024 ms 00:26:05.852 [2024-11-04 16:20:24.356490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.852 [2024-11-04 16:20:24.366782] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:05.852 [2024-11-04 16:20:24.369253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.852 [2024-11-04 16:20:24.369282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:05.852 [2024-11-04 16:20:24.369295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.738 ms 00:26:05.852 [2024-11-04 16:20:24.369305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.852 [2024-11-04 16:20:24.369383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.852 [2024-11-04 16:20:24.369395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:05.852 [2024-11-04 16:20:24.369406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:05.852 [2024-11-04 16:20:24.369416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.852 [2024-11-04 16:20:24.369483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.852 [2024-11-04 16:20:24.369495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:05.852 [2024-11-04 16:20:24.369506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:05.852 [2024-11-04 16:20:24.369515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.852 [2024-11-04 16:20:24.369535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.852 [2024-11-04 16:20:24.369549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:05.853 [2024-11-04 16:20:24.369559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:05.853 [2024-11-04 16:20:24.369569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.853 [2024-11-04 16:20:24.369601] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:05.853 [2024-11-04 16:20:24.369613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.853 [2024-11-04 16:20:24.369623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:05.853 [2024-11-04 16:20:24.369632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:05.853 [2024-11-04 16:20:24.369642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.853 [2024-11-04 16:20:24.403634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.853 [2024-11-04 16:20:24.403674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:05.853 [2024-11-04 16:20:24.403687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.026 ms 00:26:05.853 [2024-11-04 16:20:24.403698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.853 [2024-11-04 16:20:24.403804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.853 [2024-11-04 
16:20:24.403818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:05.853 [2024-11-04 16:20:24.403829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:05.853 [2024-11-04 16:20:24.403839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.853 [2024-11-04 16:20:24.404902] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 358.881 ms, result 0 00:26:06.788  [2024-11-04T16:20:26.449Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-04T16:20:27.828Z] Copying: 49/1024 [MB] (24 MBps) [2024-11-04T16:20:28.764Z] Copying: 73/1024 [MB] (23 MBps) [2024-11-04T16:20:29.702Z] Copying: 96/1024 [MB] (22 MBps) [2024-11-04T16:20:30.639Z] Copying: 120/1024 [MB] (24 MBps) [2024-11-04T16:20:31.577Z] Copying: 146/1024 [MB] (26 MBps) [2024-11-04T16:20:32.514Z] Copying: 171/1024 [MB] (24 MBps) [2024-11-04T16:20:33.450Z] Copying: 193/1024 [MB] (22 MBps) [2024-11-04T16:20:34.826Z] Copying: 218/1024 [MB] (24 MBps) [2024-11-04T16:20:35.763Z] Copying: 243/1024 [MB] (24 MBps) [2024-11-04T16:20:36.699Z] Copying: 267/1024 [MB] (24 MBps) [2024-11-04T16:20:37.636Z] Copying: 291/1024 [MB] (24 MBps) [2024-11-04T16:20:38.572Z] Copying: 313/1024 [MB] (21 MBps) [2024-11-04T16:20:39.509Z] Copying: 336/1024 [MB] (23 MBps) [2024-11-04T16:20:40.445Z] Copying: 361/1024 [MB] (24 MBps) [2024-11-04T16:20:41.824Z] Copying: 386/1024 [MB] (24 MBps) [2024-11-04T16:20:42.486Z] Copying: 410/1024 [MB] (23 MBps) [2024-11-04T16:20:43.422Z] Copying: 434/1024 [MB] (23 MBps) [2024-11-04T16:20:44.800Z] Copying: 459/1024 [MB] (25 MBps) [2024-11-04T16:20:45.736Z] Copying: 485/1024 [MB] (25 MBps) [2024-11-04T16:20:46.674Z] Copying: 510/1024 [MB] (25 MBps) [2024-11-04T16:20:47.610Z] Copying: 535/1024 [MB] (25 MBps) [2024-11-04T16:20:48.547Z] Copying: 560/1024 [MB] (25 MBps) [2024-11-04T16:20:49.486Z] Copying: 586/1024 [MB] (25 MBps) [2024-11-04T16:20:50.424Z] Copying: 611/1024 [MB] (24 MBps) [2024-11-04T16:20:51.801Z] Copying: 635/1024 [MB] (24 MBps) [2024-11-04T16:20:52.737Z] Copying: 659/1024 [MB] (23 MBps) [2024-11-04T16:20:53.675Z] Copying: 683/1024 [MB] (23 MBps) [2024-11-04T16:20:54.642Z] Copying: 707/1024 [MB] (23 MBps) [2024-11-04T16:20:55.580Z] Copying: 731/1024 [MB] (23 MBps) [2024-11-04T16:20:56.518Z] Copying: 755/1024 [MB] (24 MBps) [2024-11-04T16:20:57.456Z] Copying: 778/1024 [MB] (23 MBps) [2024-11-04T16:20:58.393Z] Copying: 802/1024 [MB] (23 MBps) [2024-11-04T16:20:59.772Z] Copying: 827/1024 [MB] (24 MBps) [2024-11-04T16:21:00.708Z] Copying: 851/1024 [MB] (24 MBps) [2024-11-04T16:21:01.646Z] Copying: 875/1024 [MB] (24 MBps) [2024-11-04T16:21:02.583Z] Copying: 901/1024 [MB] (25 MBps) [2024-11-04T16:21:03.519Z] Copying: 923/1024 [MB] (22 MBps) [2024-11-04T16:21:04.456Z] Copying: 947/1024 [MB] (23 MBps) [2024-11-04T16:21:05.399Z] Copying: 971/1024 [MB] (24 MBps) [2024-11-04T16:21:06.361Z] Copying: 996/1024 [MB] (25 MBps) [2024-11-04T16:21:07.297Z] Copying: 1021/1024 [MB] (24 MBps) [2024-11-04T16:21:07.297Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-04 16:21:07.180689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.575 [2024-11-04 16:21:07.180756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:48.575 [2024-11-04 16:21:07.180788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:48.575 [2024-11-04 16:21:07.180799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.575 
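The progress entries above finish with "Copying: 1024/1024 [MB] (average 23 MBps)". As a rough cross-check (a sketch, not part of the test output), the reported average implies on the order of 1024 / 23, roughly 45 seconds of I/O, which is broadly consistent with the ISO timestamps on the progress records running from about 16:20:26Z to 16:21:07Z:

  awk 'BEGIN { printf "expected copy time at 23 MBps: %.1f s\n", 1024 / 23 }'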
[2024-11-04 16:21:07.182135] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:48.575 [2024-11-04 16:21:07.187617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.575 [2024-11-04 16:21:07.187657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:48.575 [2024-11-04 16:21:07.187681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.458 ms 00:26:48.575 [2024-11-04 16:21:07.187691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.575 [2024-11-04 16:21:07.198542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.575 [2024-11-04 16:21:07.198584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:48.575 [2024-11-04 16:21:07.198623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.394 ms 00:26:48.575 [2024-11-04 16:21:07.198633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.575 [2024-11-04 16:21:07.222487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.575 [2024-11-04 16:21:07.222539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:48.575 [2024-11-04 16:21:07.222554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.863 ms 00:26:48.575 [2024-11-04 16:21:07.222566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.575 [2024-11-04 16:21:07.227389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.575 [2024-11-04 16:21:07.227428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:48.575 [2024-11-04 16:21:07.227439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.790 ms 00:26:48.575 [2024-11-04 16:21:07.227449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.575 [2024-11-04 16:21:07.261537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.575 [2024-11-04 16:21:07.261575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:48.575 [2024-11-04 16:21:07.261587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.104 ms 00:26:48.576 [2024-11-04 16:21:07.261613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.576 [2024-11-04 16:21:07.281432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.576 [2024-11-04 16:21:07.281469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:48.576 [2024-11-04 16:21:07.281482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.813 ms 00:26:48.576 [2024-11-04 16:21:07.281491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.836 [2024-11-04 16:21:07.396387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.836 [2024-11-04 16:21:07.396537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:48.836 [2024-11-04 16:21:07.396623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.041 ms 00:26:48.836 [2024-11-04 16:21:07.396676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.836 [2024-11-04 16:21:07.433666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.836 [2024-11-04 16:21:07.433704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:48.836 [2024-11-04 
16:21:07.433718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.005 ms 00:26:48.836 [2024-11-04 16:21:07.433728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.836 [2024-11-04 16:21:07.469555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.836 [2024-11-04 16:21:07.469595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:48.836 [2024-11-04 16:21:07.469607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.818 ms 00:26:48.836 [2024-11-04 16:21:07.469633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.836 [2024-11-04 16:21:07.503853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.836 [2024-11-04 16:21:07.503890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:48.836 [2024-11-04 16:21:07.503903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.237 ms 00:26:48.836 [2024-11-04 16:21:07.503928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.836 [2024-11-04 16:21:07.538138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.836 [2024-11-04 16:21:07.538176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:48.836 [2024-11-04 16:21:07.538189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.189 ms 00:26:48.836 [2024-11-04 16:21:07.538199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.836 [2024-11-04 16:21:07.538235] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:48.836 [2024-11-04 16:21:07.538251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 107520 / 261120 wr_cnt: 1 state: open 00:26:48.836 [2024-11-04 16:21:07.538264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 
16:21:07.538392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:48.836 [2024-11-04 16:21:07.538531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 
00:26:48.837 [2024-11-04 16:21:07.538666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 
wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.538999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:48.837 [2024-11-04 16:21:07.539374] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:48.837 [2024-11-04 16:21:07.539384] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7ec1fce4-e67d-4e51-9974-2fdcba28edba 00:26:48.837 [2024-11-04 16:21:07.539396] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 107520 00:26:48.837 [2024-11-04 16:21:07.539411] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108480 00:26:48.837 [2024-11-04 16:21:07.539431] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 107520 00:26:48.837 [2024-11-04 16:21:07.539442] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:26:48.837 [2024-11-04 16:21:07.539452] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:48.837 [2024-11-04 16:21:07.539463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:48.837 [2024-11-04 16:21:07.539473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:48.837 [2024-11-04 16:21:07.539482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:48.837 [2024-11-04 16:21:07.539491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:48.837 [2024-11-04 16:21:07.539501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.837 [2024-11-04 16:21:07.539515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:48.837 [2024-11-04 16:21:07.539525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.269 ms 00:26:48.837 [2024-11-04 16:21:07.539535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.097 [2024-11-04 16:21:07.558852] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.097 [2024-11-04 16:21:07.558888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:49.097 [2024-11-04 16:21:07.558901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.313 ms 00:26:49.097 [2024-11-04 16:21:07.558911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.097 [2024-11-04 16:21:07.559399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.097 [2024-11-04 16:21:07.559410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:49.097 [2024-11-04 16:21:07.559420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:26:49.097 [2024-11-04 16:21:07.559430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.097 [2024-11-04 16:21:07.609312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.097 [2024-11-04 16:21:07.609349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:49.097 [2024-11-04 16:21:07.609360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.097 [2024-11-04 16:21:07.609386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.097 [2024-11-04 16:21:07.609438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.097 [2024-11-04 16:21:07.609449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:49.097 [2024-11-04 16:21:07.609459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.097 [2024-11-04 16:21:07.609469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.097 [2024-11-04 16:21:07.609539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.097 [2024-11-04 16:21:07.609553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:49.097 [2024-11-04 16:21:07.609564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.097 [2024-11-04 16:21:07.609573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.097 [2024-11-04 16:21:07.609590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.097 [2024-11-04 16:21:07.609601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:49.097 [2024-11-04 16:21:07.609610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.097 [2024-11-04 16:21:07.609620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.097 [2024-11-04 16:21:07.725283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.097 [2024-11-04 16:21:07.725334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:49.097 [2024-11-04 16:21:07.725348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.097 [2024-11-04 16:21:07.725359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.356 [2024-11-04 16:21:07.820055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.356 [2024-11-04 16:21:07.820105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:49.356 [2024-11-04 16:21:07.820118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.356 [2024-11-04 16:21:07.820128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:49.356 [2024-11-04 16:21:07.820216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.356 [2024-11-04 16:21:07.820228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:49.356 [2024-11-04 16:21:07.820238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.356 [2024-11-04 16:21:07.820248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.356 [2024-11-04 16:21:07.820282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.356 [2024-11-04 16:21:07.820293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:49.356 [2024-11-04 16:21:07.820314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.356 [2024-11-04 16:21:07.820324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.356 [2024-11-04 16:21:07.820426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.356 [2024-11-04 16:21:07.820442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:49.356 [2024-11-04 16:21:07.820452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.356 [2024-11-04 16:21:07.820461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.356 [2024-11-04 16:21:07.820494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.356 [2024-11-04 16:21:07.820506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:49.356 [2024-11-04 16:21:07.820515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.356 [2024-11-04 16:21:07.820525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.356 [2024-11-04 16:21:07.820560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.356 [2024-11-04 16:21:07.820574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:49.356 [2024-11-04 16:21:07.820583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.356 [2024-11-04 16:21:07.820593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.356 [2024-11-04 16:21:07.820631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.356 [2024-11-04 16:21:07.820642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:49.356 [2024-11-04 16:21:07.820652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.356 [2024-11-04 16:21:07.820661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.356 [2024-11-04 16:21:07.820816] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 643.198 ms, result 0 00:26:50.734 00:26:50.734 00:26:50.992 16:21:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:52.369 16:21:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:52.628 [2024-11-04 16:21:11.140077] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
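The spdk_dd invocation above reads 262144 blocks from ftl0, and the statistics dumped just before the preceding "FTL shutdown" report 108480 total writes against 107520 user writes. Two quick sanity checks on those numbers (a sketch only; the 4096-byte FTL block size is an assumption inferred from the 1024 MB total shown by the copy progress):

  # 262144 blocks * 4096 B = 1024 MiB, matching the "Copying: .../1024 [MB]" progress
  echo $(( 262144 * 4096 / 1024 / 1024 ))
  # write amplification from the dumped counters, matching the reported "WAF: 1.0089"
  awk 'BEGIN { printf "WAF: %.4f\n", 108480 / 107520 }'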
00:26:52.628 [2024-11-04 16:21:11.140357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79851 ] 00:26:52.628 [2024-11-04 16:21:11.322537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.886 [2024-11-04 16:21:11.429685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.145 [2024-11-04 16:21:11.774927] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:53.145 [2024-11-04 16:21:11.775229] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:53.405 [2024-11-04 16:21:11.936572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.405 [2024-11-04 16:21:11.936804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:53.405 [2024-11-04 16:21:11.936856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:53.405 [2024-11-04 16:21:11.936869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.405 [2024-11-04 16:21:11.936933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.405 [2024-11-04 16:21:11.936948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:53.405 [2024-11-04 16:21:11.936965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:53.405 [2024-11-04 16:21:11.936977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.405 [2024-11-04 16:21:11.937003] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:53.405 [2024-11-04 16:21:11.937911] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:53.405 [2024-11-04 16:21:11.937935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.405 [2024-11-04 16:21:11.937947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:53.405 [2024-11-04 16:21:11.937961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:26:53.405 [2024-11-04 16:21:11.937973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.405 [2024-11-04 16:21:11.939477] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:53.405 [2024-11-04 16:21:11.957621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.405 [2024-11-04 16:21:11.957665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:53.405 [2024-11-04 16:21:11.957682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.175 ms 00:26:53.405 [2024-11-04 16:21:11.957694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.405 [2024-11-04 16:21:11.957792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.405 [2024-11-04 16:21:11.957807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:53.405 [2024-11-04 16:21:11.957821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:53.405 [2024-11-04 16:21:11.957832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.405 [2024-11-04 16:21:11.964802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:53.405 [2024-11-04 16:21:11.964834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:53.405 [2024-11-04 16:21:11.964846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.902 ms 00:26:53.405 [2024-11-04 16:21:11.964857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.405 [2024-11-04 16:21:11.964938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.405 [2024-11-04 16:21:11.964953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:53.405 [2024-11-04 16:21:11.964964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:53.405 [2024-11-04 16:21:11.964975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.405 [2024-11-04 16:21:11.965018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.405 [2024-11-04 16:21:11.965030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:53.405 [2024-11-04 16:21:11.965042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:53.405 [2024-11-04 16:21:11.965053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.405 [2024-11-04 16:21:11.965079] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:53.405 [2024-11-04 16:21:11.969769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.405 [2024-11-04 16:21:11.969804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:53.405 [2024-11-04 16:21:11.969817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.703 ms 00:26:53.405 [2024-11-04 16:21:11.969833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.405 [2024-11-04 16:21:11.969865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.405 [2024-11-04 16:21:11.969877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:53.405 [2024-11-04 16:21:11.969889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:53.405 [2024-11-04 16:21:11.969899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.405 [2024-11-04 16:21:11.969957] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:53.405 [2024-11-04 16:21:11.969982] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:53.405 [2024-11-04 16:21:11.970016] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:53.405 [2024-11-04 16:21:11.970037] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:53.405 [2024-11-04 16:21:11.970122] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:53.405 [2024-11-04 16:21:11.970137] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:53.405 [2024-11-04 16:21:11.970152] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:53.405 [2024-11-04 16:21:11.970166] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:53.405 [2024-11-04 16:21:11.970179] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:53.405 [2024-11-04 16:21:11.970191] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:53.405 [2024-11-04 16:21:11.970202] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:53.405 [2024-11-04 16:21:11.970213] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:53.405 [2024-11-04 16:21:11.970224] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:53.405 [2024-11-04 16:21:11.970240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.405 [2024-11-04 16:21:11.970251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:53.405 [2024-11-04 16:21:11.970262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:26:53.405 [2024-11-04 16:21:11.970273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.405 [2024-11-04 16:21:11.970341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.405 [2024-11-04 16:21:11.970353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:53.405 [2024-11-04 16:21:11.970365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:53.405 [2024-11-04 16:21:11.970376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.405 [2024-11-04 16:21:11.970470] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:53.405 [2024-11-04 16:21:11.970491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:53.405 [2024-11-04 16:21:11.970503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:53.405 [2024-11-04 16:21:11.970515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.405 [2024-11-04 16:21:11.970526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:53.405 [2024-11-04 16:21:11.970537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:53.405 [2024-11-04 16:21:11.970548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:53.405 [2024-11-04 16:21:11.970559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:53.405 [2024-11-04 16:21:11.970570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:53.405 [2024-11-04 16:21:11.970580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:53.405 [2024-11-04 16:21:11.970590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:53.405 [2024-11-04 16:21:11.970612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:53.405 [2024-11-04 16:21:11.970640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:53.405 [2024-11-04 16:21:11.970651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:53.405 [2024-11-04 16:21:11.970662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:53.406 [2024-11-04 16:21:11.970683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.406 [2024-11-04 16:21:11.970694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:53.406 [2024-11-04 16:21:11.970706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:53.406 [2024-11-04 16:21:11.970716] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.406 [2024-11-04 16:21:11.970727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:53.406 [2024-11-04 16:21:11.970738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:53.406 [2024-11-04 16:21:11.970748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.406 [2024-11-04 16:21:11.970758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:53.406 [2024-11-04 16:21:11.971004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:53.406 [2024-11-04 16:21:11.971042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.406 [2024-11-04 16:21:11.971078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:53.406 [2024-11-04 16:21:11.971114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:53.406 [2024-11-04 16:21:11.971149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.406 [2024-11-04 16:21:11.971182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:53.406 [2024-11-04 16:21:11.971216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:53.406 [2024-11-04 16:21:11.971310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:53.406 [2024-11-04 16:21:11.971350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:53.406 [2024-11-04 16:21:11.971384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:53.406 [2024-11-04 16:21:11.971419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:53.406 [2024-11-04 16:21:11.971452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:53.406 [2024-11-04 16:21:11.971486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:53.406 [2024-11-04 16:21:11.971519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:53.406 [2024-11-04 16:21:11.971554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:53.406 [2024-11-04 16:21:11.971633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:53.406 [2024-11-04 16:21:11.971671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.406 [2024-11-04 16:21:11.971705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:53.406 [2024-11-04 16:21:11.971739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:53.406 [2024-11-04 16:21:11.971800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.406 [2024-11-04 16:21:11.971837] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:53.406 [2024-11-04 16:21:11.971926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:53.406 [2024-11-04 16:21:11.971966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:53.406 [2024-11-04 16:21:11.972001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:53.406 [2024-11-04 16:21:11.972036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:53.406 [2024-11-04 16:21:11.972071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:53.406 [2024-11-04 16:21:11.972105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:53.406 
[2024-11-04 16:21:11.972139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:53.406 [2024-11-04 16:21:11.972281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:53.406 [2024-11-04 16:21:11.972315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:53.406 [2024-11-04 16:21:11.972351] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:53.406 [2024-11-04 16:21:11.972368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:53.406 [2024-11-04 16:21:11.972382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:53.406 [2024-11-04 16:21:11.972395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:53.406 [2024-11-04 16:21:11.972407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:53.406 [2024-11-04 16:21:11.972419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:53.406 [2024-11-04 16:21:11.972431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:53.406 [2024-11-04 16:21:11.972443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:53.406 [2024-11-04 16:21:11.972454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:53.406 [2024-11-04 16:21:11.972466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:53.406 [2024-11-04 16:21:11.972478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:53.406 [2024-11-04 16:21:11.972490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:53.406 [2024-11-04 16:21:11.972502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:53.406 [2024-11-04 16:21:11.972514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:53.406 [2024-11-04 16:21:11.972526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:53.406 [2024-11-04 16:21:11.972538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:53.406 [2024-11-04 16:21:11.972550] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:53.406 [2024-11-04 16:21:11.972570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:53.406 [2024-11-04 16:21:11.972583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:53.406 [2024-11-04 16:21:11.972595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:53.406 [2024-11-04 16:21:11.972608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:53.406 [2024-11-04 16:21:11.972620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:53.406 [2024-11-04 16:21:11.972635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.406 [2024-11-04 16:21:11.972647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:53.406 [2024-11-04 16:21:11.972660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.218 ms 00:26:53.406 [2024-11-04 16:21:11.972672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.406 [2024-11-04 16:21:12.012732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.406 [2024-11-04 16:21:12.012812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:53.406 [2024-11-04 16:21:12.012829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.063 ms 00:26:53.406 [2024-11-04 16:21:12.012842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.406 [2024-11-04 16:21:12.012928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.406 [2024-11-04 16:21:12.012941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:53.406 [2024-11-04 16:21:12.012954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:53.406 [2024-11-04 16:21:12.012965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.406 [2024-11-04 16:21:12.082683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.406 [2024-11-04 16:21:12.082722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:53.406 [2024-11-04 16:21:12.082737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.748 ms 00:26:53.406 [2024-11-04 16:21:12.082777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.406 [2024-11-04 16:21:12.082819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.406 [2024-11-04 16:21:12.082832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:53.406 [2024-11-04 16:21:12.082844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:53.406 [2024-11-04 16:21:12.082862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.406 [2024-11-04 16:21:12.083383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.406 [2024-11-04 16:21:12.083405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:53.406 [2024-11-04 16:21:12.083418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:26:53.406 [2024-11-04 16:21:12.083430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.406 [2024-11-04 16:21:12.083551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.406 [2024-11-04 16:21:12.083567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:53.406 [2024-11-04 16:21:12.083579] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:26:53.406 [2024-11-04 16:21:12.083598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.406 [2024-11-04 16:21:12.103130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.406 [2024-11-04 16:21:12.103316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:53.406 [2024-11-04 16:21:12.103346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.539 ms 00:26:53.406 [2024-11-04 16:21:12.103360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.406 [2024-11-04 16:21:12.122014] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:53.406 [2024-11-04 16:21:12.122159] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:53.406 [2024-11-04 16:21:12.122196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.406 [2024-11-04 16:21:12.122210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:53.406 [2024-11-04 16:21:12.122223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.738 ms 00:26:53.406 [2024-11-04 16:21:12.122235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.666 [2024-11-04 16:21:12.150934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.666 [2024-11-04 16:21:12.150984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:53.666 [2024-11-04 16:21:12.150999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.659 ms 00:26:53.666 [2024-11-04 16:21:12.151027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.666 [2024-11-04 16:21:12.168323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.666 [2024-11-04 16:21:12.168491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:53.666 [2024-11-04 16:21:12.168620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.276 ms 00:26:53.666 [2024-11-04 16:21:12.168663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.666 [2024-11-04 16:21:12.186385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.666 [2024-11-04 16:21:12.186560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:53.666 [2024-11-04 16:21:12.186774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.687 ms 00:26:53.666 [2024-11-04 16:21:12.186818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.666 [2024-11-04 16:21:12.187510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.666 [2024-11-04 16:21:12.187649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:53.666 [2024-11-04 16:21:12.187733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:26:53.666 [2024-11-04 16:21:12.187776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.666 [2024-11-04 16:21:12.273617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.666 [2024-11-04 16:21:12.273681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:53.666 [2024-11-04 16:21:12.273723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.948 ms 00:26:53.666 [2024-11-04 16:21:12.273736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.666 [2024-11-04 16:21:12.284425] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:53.666 [2024-11-04 16:21:12.287324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.666 [2024-11-04 16:21:12.287361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:53.666 [2024-11-04 16:21:12.287377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.542 ms 00:26:53.666 [2024-11-04 16:21:12.287389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.666 [2024-11-04 16:21:12.287482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.666 [2024-11-04 16:21:12.287496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:53.666 [2024-11-04 16:21:12.287509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:53.666 [2024-11-04 16:21:12.287526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.666 [2024-11-04 16:21:12.289186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.666 [2024-11-04 16:21:12.289329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:53.666 [2024-11-04 16:21:12.289410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.595 ms 00:26:53.666 [2024-11-04 16:21:12.289449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.666 [2024-11-04 16:21:12.289515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.666 [2024-11-04 16:21:12.289552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:53.666 [2024-11-04 16:21:12.289568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:53.666 [2024-11-04 16:21:12.289580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.666 [2024-11-04 16:21:12.289623] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:53.666 [2024-11-04 16:21:12.289641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.666 [2024-11-04 16:21:12.289654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:53.666 [2024-11-04 16:21:12.289667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:53.666 [2024-11-04 16:21:12.289679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.666 [2024-11-04 16:21:12.324988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.666 [2024-11-04 16:21:12.325031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:53.666 [2024-11-04 16:21:12.325047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.339 ms 00:26:53.666 [2024-11-04 16:21:12.325082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.666 [2024-11-04 16:21:12.325161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.666 [2024-11-04 16:21:12.325176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:53.666 [2024-11-04 16:21:12.325189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:53.666 [2024-11-04 16:21:12.325200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
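The superblock metadata layout dumped during this second startup lists every region as blk_offs/blk_sz in hexadecimal FTL blocks, while the "NV cache layout" records a few lines earlier give the same regions in MiB. A minimal conversion sketch (again assuming a 4096-byte block; the example values are taken from the type:0x3 entry, blk_offs:0x5020 blk_sz:0x80, which lines up with the band_md region reported at offset 80.12 MiB, 0.50 MiB):

  bs=4096
  awk -v off=$(( 0x5020 )) -v sz=$(( 0x80 )) -v bs=$bs \
      'BEGIN { printf "offset: %.2f MiB, size: %.2f MiB\n", off * bs / 1048576, sz * bs / 1048576 }'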
00:26:53.666 [2024-11-04 16:21:12.326324] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 389.916 ms, result 0 00:26:55.044  [2024-11-04T16:21:14.702Z] Copying: 1288/1048576 [kB] (1288 kBps) [2024-11-04T16:21:15.638Z] Copying: 9140/1048576 [kB] (7852 kBps) [2024-11-04T16:21:16.609Z] Copying: 39/1024 [MB] (30 MBps) [2024-11-04T16:21:17.546Z] Copying: 70/1024 [MB] (31 MBps) [2024-11-04T16:21:18.931Z] Copying: 101/1024 [MB] (31 MBps) [2024-11-04T16:21:19.872Z] Copying: 132/1024 [MB] (31 MBps) [2024-11-04T16:21:20.808Z] Copying: 162/1024 [MB] (30 MBps) [2024-11-04T16:21:21.745Z] Copying: 196/1024 [MB] (33 MBps) [2024-11-04T16:21:22.683Z] Copying: 229/1024 [MB] (33 MBps) [2024-11-04T16:21:23.621Z] Copying: 262/1024 [MB] (33 MBps) [2024-11-04T16:21:24.559Z] Copying: 296/1024 [MB] (33 MBps) [2024-11-04T16:21:25.937Z] Copying: 330/1024 [MB] (34 MBps) [2024-11-04T16:21:26.874Z] Copying: 363/1024 [MB] (33 MBps) [2024-11-04T16:21:27.824Z] Copying: 397/1024 [MB] (33 MBps) [2024-11-04T16:21:28.775Z] Copying: 432/1024 [MB] (34 MBps) [2024-11-04T16:21:29.711Z] Copying: 466/1024 [MB] (34 MBps) [2024-11-04T16:21:30.648Z] Copying: 500/1024 [MB] (33 MBps) [2024-11-04T16:21:31.584Z] Copying: 535/1024 [MB] (35 MBps) [2024-11-04T16:21:32.521Z] Copying: 569/1024 [MB] (33 MBps) [2024-11-04T16:21:33.899Z] Copying: 603/1024 [MB] (34 MBps) [2024-11-04T16:21:34.836Z] Copying: 639/1024 [MB] (35 MBps) [2024-11-04T16:21:35.772Z] Copying: 673/1024 [MB] (34 MBps) [2024-11-04T16:21:36.709Z] Copying: 705/1024 [MB] (32 MBps) [2024-11-04T16:21:37.646Z] Copying: 739/1024 [MB] (33 MBps) [2024-11-04T16:21:38.583Z] Copying: 773/1024 [MB] (33 MBps) [2024-11-04T16:21:39.524Z] Copying: 809/1024 [MB] (35 MBps) [2024-11-04T16:21:40.900Z] Copying: 844/1024 [MB] (35 MBps) [2024-11-04T16:21:41.836Z] Copying: 878/1024 [MB] (33 MBps) [2024-11-04T16:21:42.773Z] Copying: 912/1024 [MB] (34 MBps) [2024-11-04T16:21:43.709Z] Copying: 946/1024 [MB] (33 MBps) [2024-11-04T16:21:44.647Z] Copying: 980/1024 [MB] (34 MBps) [2024-11-04T16:21:44.906Z] Copying: 1014/1024 [MB] (34 MBps) [2024-11-04T16:21:46.285Z] Copying: 1024/1024 [MB] (average 31 MBps)[2024-11-04 16:21:46.260407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.563 [2024-11-04 16:21:46.260478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:27.563 [2024-11-04 16:21:46.260508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:27.563 [2024-11-04 16:21:46.260523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.563 [2024-11-04 16:21:46.260555] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:27.563 [2024-11-04 16:21:46.267062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.563 [2024-11-04 16:21:46.267103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:27.563 [2024-11-04 16:21:46.267120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.491 ms 00:27:27.563 [2024-11-04 16:21:46.267135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.563 [2024-11-04 16:21:46.267413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.563 [2024-11-04 16:21:46.267430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:27.563 [2024-11-04 16:21:46.267450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.238 ms 00:27:27.563 [2024-11-04 16:21:46.267465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.563 [2024-11-04 16:21:46.277673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.563 [2024-11-04 16:21:46.277837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:27.563 [2024-11-04 16:21:46.277933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.199 ms 00:27:27.563 [2024-11-04 16:21:46.278021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.563 [2024-11-04 16:21:46.283024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.563 [2024-11-04 16:21:46.283159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:27.563 [2024-11-04 16:21:46.283285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.930 ms 00:27:27.563 [2024-11-04 16:21:46.283331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.823 [2024-11-04 16:21:46.317816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.823 [2024-11-04 16:21:46.317982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:27.823 [2024-11-04 16:21:46.318099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.453 ms 00:27:27.824 [2024-11-04 16:21:46.318137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.824 [2024-11-04 16:21:46.338477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.824 [2024-11-04 16:21:46.338635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:27.824 [2024-11-04 16:21:46.338767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.317 ms 00:27:27.824 [2024-11-04 16:21:46.338810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.824 [2024-11-04 16:21:46.340894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.824 [2024-11-04 16:21:46.341025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:27.824 [2024-11-04 16:21:46.341115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.007 ms 00:27:27.824 [2024-11-04 16:21:46.341154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.824 [2024-11-04 16:21:46.374956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.824 [2024-11-04 16:21:46.375094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:27.824 [2024-11-04 16:21:46.375182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.807 ms 00:27:27.824 [2024-11-04 16:21:46.375216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.824 [2024-11-04 16:21:46.408785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.824 [2024-11-04 16:21:46.408910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:27.824 [2024-11-04 16:21:46.409008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.546 ms 00:27:27.824 [2024-11-04 16:21:46.409042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.824 [2024-11-04 16:21:46.442941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.824 [2024-11-04 16:21:46.443063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:27.824 [2024-11-04 
16:21:46.443146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.897 ms 00:27:27.824 [2024-11-04 16:21:46.443180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.824 [2024-11-04 16:21:46.476524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.824 [2024-11-04 16:21:46.476681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:27.824 [2024-11-04 16:21:46.476790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.289 ms 00:27:27.824 [2024-11-04 16:21:46.476829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.824 [2024-11-04 16:21:46.476883] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:27.824 [2024-11-04 16:21:46.476933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:27.824 [2024-11-04 16:21:46.477031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:27.824 [2024-11-04 16:21:46.477083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477514] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 
16:21:46.477784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.477990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.478000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.478017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.478027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:27.824 [2024-11-04 16:21:46.478038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:27:27.825 [2024-11-04 16:21:46.478049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:27.825 [2024-11-04 16:21:46.478379] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:27.825 [2024-11-04 16:21:46.478389] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7ec1fce4-e67d-4e51-9974-2fdcba28edba 00:27:27.825 [2024-11-04 16:21:46.478400] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:27.825 [2024-11-04 16:21:46.478410] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 157120 00:27:27.825 [2024-11-04 16:21:46.478419] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 155136 00:27:27.825 [2024-11-04 16:21:46.478433] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0128 00:27:27.825 [2024-11-04 16:21:46.478443] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:27.825 [2024-11-04 16:21:46.478453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:27.825 [2024-11-04 16:21:46.478463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:27.825 [2024-11-04 16:21:46.478481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:27.825 [2024-11-04 16:21:46.478501] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:27.825 [2024-11-04 16:21:46.478511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.825 [2024-11-04 16:21:46.478521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:27.825 [2024-11-04 16:21:46.478531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:27:27.825 [2024-11-04 16:21:46.478541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.825 [2024-11-04 16:21:46.497691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.825 [2024-11-04 16:21:46.497728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:27.825 [2024-11-04 16:21:46.497740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.142 ms 00:27:27.825 [2024-11-04 16:21:46.497764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.825 [2024-11-04 16:21:46.498316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.825 [2024-11-04 16:21:46.498339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:27.825 [2024-11-04 16:21:46.498350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:27:27.825 [2024-11-04 16:21:46.498360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.085 [2024-11-04 
16:21:46.546162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.085 [2024-11-04 16:21:46.546287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:28.085 [2024-11-04 16:21:46.546307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.085 [2024-11-04 16:21:46.546316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.085 [2024-11-04 16:21:46.546382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.085 [2024-11-04 16:21:46.546393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:28.085 [2024-11-04 16:21:46.546402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.085 [2024-11-04 16:21:46.546412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.085 [2024-11-04 16:21:46.546475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.085 [2024-11-04 16:21:46.546494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:28.085 [2024-11-04 16:21:46.546504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.085 [2024-11-04 16:21:46.546514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.085 [2024-11-04 16:21:46.546530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.085 [2024-11-04 16:21:46.546540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:28.085 [2024-11-04 16:21:46.546550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.085 [2024-11-04 16:21:46.546560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.085 [2024-11-04 16:21:46.663637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.085 [2024-11-04 16:21:46.663685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:28.085 [2024-11-04 16:21:46.663698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.085 [2024-11-04 16:21:46.663709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.085 [2024-11-04 16:21:46.759677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.085 [2024-11-04 16:21:46.759871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:28.085 [2024-11-04 16:21:46.759894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.085 [2024-11-04 16:21:46.759905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.085 [2024-11-04 16:21:46.759988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.085 [2024-11-04 16:21:46.760000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:28.085 [2024-11-04 16:21:46.760016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.085 [2024-11-04 16:21:46.760026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.085 [2024-11-04 16:21:46.760062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.085 [2024-11-04 16:21:46.760073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:28.085 [2024-11-04 16:21:46.760083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.085 [2024-11-04 16:21:46.760093] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.085 [2024-11-04 16:21:46.760205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.085 [2024-11-04 16:21:46.760218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:28.085 [2024-11-04 16:21:46.760229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.085 [2024-11-04 16:21:46.760244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.085 [2024-11-04 16:21:46.760278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.085 [2024-11-04 16:21:46.760290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:28.085 [2024-11-04 16:21:46.760300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.085 [2024-11-04 16:21:46.760310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.085 [2024-11-04 16:21:46.760346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.085 [2024-11-04 16:21:46.760357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:28.085 [2024-11-04 16:21:46.760367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.085 [2024-11-04 16:21:46.760381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.085 [2024-11-04 16:21:46.760421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:28.085 [2024-11-04 16:21:46.760433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:28.085 [2024-11-04 16:21:46.760443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:28.085 [2024-11-04 16:21:46.760453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.085 [2024-11-04 16:21:46.760567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 500.948 ms, result 0 00:27:29.021 00:27:29.021 00:27:29.280 16:21:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:30.659 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:30.659 16:21:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:30.918 [2024-11-04 16:21:49.458194] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
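The two commands above are the verification half of the dirty-shutdown test: md5sum -c confirms the previously copied testfile matches its recorded checksum, and spdk_dd then reads the next 262144 blocks of ftl0 (--skip=262144 --count=262144) into testfile2 for the same kind of comparison. The statistics dumped during the preceding shutdown also give the write amplification directly: total writes 157120 over user writes 155136 is about 1.0128, the WAF the log reports. A rough sketch of the read-back step, with the flags copied from the log (the testfile2.md5 reference file is an assumption for illustration, not something this log shows):

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK"/build/bin/spdk_dd --ib=ftl0 \
    --of="$SPDK"/test/ftl/testfile2 \
    --count=262144 --skip=262144 \
    --json="$SPDK"/test/ftl/config/ftl.json     # read the second 262144-block slice of ftl0
md5sum -c "$SPDK"/test/ftl/testfile2.md5        # hypothetical reference checksum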
00:27:30.918 [2024-11-04 16:21:49.458305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80237 ] 00:27:30.918 [2024-11-04 16:21:49.635758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.177 [2024-11-04 16:21:49.747967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.436 [2024-11-04 16:21:50.096969] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:31.436 [2024-11-04 16:21:50.097036] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:31.728 [2024-11-04 16:21:50.257269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.728 [2024-11-04 16:21:50.257317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:31.728 [2024-11-04 16:21:50.257337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:31.728 [2024-11-04 16:21:50.257347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.728 [2024-11-04 16:21:50.257392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.728 [2024-11-04 16:21:50.257403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:31.728 [2024-11-04 16:21:50.257416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:31.728 [2024-11-04 16:21:50.257425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.728 [2024-11-04 16:21:50.257445] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:31.728 [2024-11-04 16:21:50.258444] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:31.728 [2024-11-04 16:21:50.258472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.728 [2024-11-04 16:21:50.258483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:31.728 [2024-11-04 16:21:50.258494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:27:31.728 [2024-11-04 16:21:50.258504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.728 [2024-11-04 16:21:50.259960] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:31.728 [2024-11-04 16:21:50.277947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.728 [2024-11-04 16:21:50.277986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:31.728 [2024-11-04 16:21:50.278001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.017 ms 00:27:31.728 [2024-11-04 16:21:50.278011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.728 [2024-11-04 16:21:50.278071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.728 [2024-11-04 16:21:50.278083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:31.728 [2024-11-04 16:21:50.278093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:31.728 [2024-11-04 16:21:50.278103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.728 [2024-11-04 16:21:50.284989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:31.728 [2024-11-04 16:21:50.285170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:31.728 [2024-11-04 16:21:50.285192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.830 ms 00:27:31.728 [2024-11-04 16:21:50.285202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.728 [2024-11-04 16:21:50.285291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.728 [2024-11-04 16:21:50.285304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:31.728 [2024-11-04 16:21:50.285315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:27:31.728 [2024-11-04 16:21:50.285325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.728 [2024-11-04 16:21:50.285367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.728 [2024-11-04 16:21:50.285378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:31.728 [2024-11-04 16:21:50.285389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:31.728 [2024-11-04 16:21:50.285398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.728 [2024-11-04 16:21:50.285421] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:31.728 [2024-11-04 16:21:50.290223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.728 [2024-11-04 16:21:50.290255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:31.728 [2024-11-04 16:21:50.290267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.814 ms 00:27:31.728 [2024-11-04 16:21:50.290297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.728 [2024-11-04 16:21:50.290327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.728 [2024-11-04 16:21:50.290337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:31.728 [2024-11-04 16:21:50.290348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:31.728 [2024-11-04 16:21:50.290357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.728 [2024-11-04 16:21:50.290410] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:31.728 [2024-11-04 16:21:50.290432] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:31.729 [2024-11-04 16:21:50.290466] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:31.729 [2024-11-04 16:21:50.290487] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:31.729 [2024-11-04 16:21:50.290573] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:31.729 [2024-11-04 16:21:50.290586] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:31.729 [2024-11-04 16:21:50.290608] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:31.729 [2024-11-04 16:21:50.290620] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:31.729 [2024-11-04 16:21:50.290633] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:31.729 [2024-11-04 16:21:50.290644] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:31.729 [2024-11-04 16:21:50.290654] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:31.729 [2024-11-04 16:21:50.290663] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:31.729 [2024-11-04 16:21:50.290673] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:31.729 [2024-11-04 16:21:50.290687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.729 [2024-11-04 16:21:50.290698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:31.729 [2024-11-04 16:21:50.290708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:27:31.729 [2024-11-04 16:21:50.290718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.729 [2024-11-04 16:21:50.290806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.729 [2024-11-04 16:21:50.290818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:31.729 [2024-11-04 16:21:50.290828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:31.729 [2024-11-04 16:21:50.290837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.729 [2024-11-04 16:21:50.290944] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:31.729 [2024-11-04 16:21:50.290963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:31.729 [2024-11-04 16:21:50.290974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:31.729 [2024-11-04 16:21:50.290984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.729 [2024-11-04 16:21:50.290995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:31.729 [2024-11-04 16:21:50.291005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:31.729 [2024-11-04 16:21:50.291014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:31.729 [2024-11-04 16:21:50.291026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:31.729 [2024-11-04 16:21:50.291036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:31.729 [2024-11-04 16:21:50.291045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:31.729 [2024-11-04 16:21:50.291054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:31.729 [2024-11-04 16:21:50.291064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:31.729 [2024-11-04 16:21:50.291073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:31.729 [2024-11-04 16:21:50.291082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:31.729 [2024-11-04 16:21:50.291092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:31.729 [2024-11-04 16:21:50.291110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.729 [2024-11-04 16:21:50.291120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:31.729 [2024-11-04 16:21:50.291130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:31.729 [2024-11-04 16:21:50.291139] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.729 [2024-11-04 16:21:50.291149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:31.729 [2024-11-04 16:21:50.291158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:31.729 [2024-11-04 16:21:50.291167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:31.729 [2024-11-04 16:21:50.291176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:31.729 [2024-11-04 16:21:50.291185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:31.729 [2024-11-04 16:21:50.291194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:31.729 [2024-11-04 16:21:50.291203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:31.729 [2024-11-04 16:21:50.291212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:31.729 [2024-11-04 16:21:50.291221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:31.729 [2024-11-04 16:21:50.291230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:31.729 [2024-11-04 16:21:50.291239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:31.729 [2024-11-04 16:21:50.291248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:31.729 [2024-11-04 16:21:50.291257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:31.729 [2024-11-04 16:21:50.291266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:31.729 [2024-11-04 16:21:50.291275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:31.729 [2024-11-04 16:21:50.291283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:31.729 [2024-11-04 16:21:50.291293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:31.729 [2024-11-04 16:21:50.291302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:31.729 [2024-11-04 16:21:50.291311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:31.729 [2024-11-04 16:21:50.291320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:31.729 [2024-11-04 16:21:50.291329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.729 [2024-11-04 16:21:50.291338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:31.729 [2024-11-04 16:21:50.291347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:31.729 [2024-11-04 16:21:50.291356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.729 [2024-11-04 16:21:50.291365] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:31.729 [2024-11-04 16:21:50.291375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:31.729 [2024-11-04 16:21:50.291385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:31.729 [2024-11-04 16:21:50.291394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:31.729 [2024-11-04 16:21:50.291404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:31.729 [2024-11-04 16:21:50.291414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:31.729 [2024-11-04 16:21:50.291423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:31.729 
[2024-11-04 16:21:50.291432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:31.729 [2024-11-04 16:21:50.291441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:31.729 [2024-11-04 16:21:50.291450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:31.729 [2024-11-04 16:21:50.291461] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:31.729 [2024-11-04 16:21:50.291473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:31.729 [2024-11-04 16:21:50.291484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:31.729 [2024-11-04 16:21:50.291494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:31.729 [2024-11-04 16:21:50.291504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:31.729 [2024-11-04 16:21:50.291514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:31.729 [2024-11-04 16:21:50.291525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:31.729 [2024-11-04 16:21:50.291535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:31.729 [2024-11-04 16:21:50.291545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:31.729 [2024-11-04 16:21:50.291555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:31.729 [2024-11-04 16:21:50.291566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:31.729 [2024-11-04 16:21:50.291576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:31.729 [2024-11-04 16:21:50.291586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:31.729 [2024-11-04 16:21:50.291596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:31.729 [2024-11-04 16:21:50.291606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:31.729 [2024-11-04 16:21:50.291616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:31.729 [2024-11-04 16:21:50.291627] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:31.729 [2024-11-04 16:21:50.291642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:31.729 [2024-11-04 16:21:50.291653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:31.729 [2024-11-04 16:21:50.291664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:31.729 [2024-11-04 16:21:50.291674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:31.729 [2024-11-04 16:21:50.291684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:31.729 [2024-11-04 16:21:50.291695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.729 [2024-11-04 16:21:50.291705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:31.729 [2024-11-04 16:21:50.291716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:27:31.729 [2024-11-04 16:21:50.291726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.729 [2024-11-04 16:21:50.331153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.730 [2024-11-04 16:21:50.331191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:31.730 [2024-11-04 16:21:50.331205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.436 ms 00:27:31.730 [2024-11-04 16:21:50.331216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.730 [2024-11-04 16:21:50.331312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.730 [2024-11-04 16:21:50.331323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:31.730 [2024-11-04 16:21:50.331334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:27:31.730 [2024-11-04 16:21:50.331344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.730 [2024-11-04 16:21:50.386003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.730 [2024-11-04 16:21:50.386036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:31.730 [2024-11-04 16:21:50.386049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.690 ms 00:27:31.730 [2024-11-04 16:21:50.386059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.730 [2024-11-04 16:21:50.386093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.730 [2024-11-04 16:21:50.386103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:31.730 [2024-11-04 16:21:50.386113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:31.730 [2024-11-04 16:21:50.386127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.730 [2024-11-04 16:21:50.386627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.730 [2024-11-04 16:21:50.386641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:31.730 [2024-11-04 16:21:50.386653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:27:31.730 [2024-11-04 16:21:50.386662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.730 [2024-11-04 16:21:50.386809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.730 [2024-11-04 16:21:50.386824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:31.730 [2024-11-04 16:21:50.386834] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:27:31.730 [2024-11-04 16:21:50.386850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.730 [2024-11-04 16:21:50.406076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.730 [2024-11-04 16:21:50.406108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:31.730 [2024-11-04 16:21:50.406124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.237 ms 00:27:31.730 [2024-11-04 16:21:50.406134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.730 [2024-11-04 16:21:50.424223] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:31.730 [2024-11-04 16:21:50.424261] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:31.730 [2024-11-04 16:21:50.424275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.730 [2024-11-04 16:21:50.424302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:31.730 [2024-11-04 16:21:50.424313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.079 ms 00:27:31.730 [2024-11-04 16:21:50.424323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.990 [2024-11-04 16:21:50.452608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.990 [2024-11-04 16:21:50.452652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:31.990 [2024-11-04 16:21:50.452665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.290 ms 00:27:31.990 [2024-11-04 16:21:50.452674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.990 [2024-11-04 16:21:50.469551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.990 [2024-11-04 16:21:50.469588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:31.990 [2024-11-04 16:21:50.469600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.863 ms 00:27:31.990 [2024-11-04 16:21:50.469610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.990 [2024-11-04 16:21:50.486779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.990 [2024-11-04 16:21:50.486822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:31.990 [2024-11-04 16:21:50.486837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.145 ms 00:27:31.990 [2024-11-04 16:21:50.486847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.990 [2024-11-04 16:21:50.487597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.990 [2024-11-04 16:21:50.487622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:31.990 [2024-11-04 16:21:50.487634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:27:31.990 [2024-11-04 16:21:50.487648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.990 [2024-11-04 16:21:50.568231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.990 [2024-11-04 16:21:50.568290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:31.990 [2024-11-04 16:21:50.568312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 80.693 ms 00:27:31.990 [2024-11-04 16:21:50.568322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.990 [2024-11-04 16:21:50.578371] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:31.990 [2024-11-04 16:21:50.580835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.990 [2024-11-04 16:21:50.580955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:31.990 [2024-11-04 16:21:50.581041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.489 ms 00:27:31.990 [2024-11-04 16:21:50.581078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.990 [2024-11-04 16:21:50.581179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.990 [2024-11-04 16:21:50.581216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:31.990 [2024-11-04 16:21:50.581246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:31.990 [2024-11-04 16:21:50.581281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.990 [2024-11-04 16:21:50.582239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.990 [2024-11-04 16:21:50.582268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:31.990 [2024-11-04 16:21:50.582280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:27:31.990 [2024-11-04 16:21:50.582291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.990 [2024-11-04 16:21:50.582318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.990 [2024-11-04 16:21:50.582330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:31.990 [2024-11-04 16:21:50.582341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:31.990 [2024-11-04 16:21:50.582351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.990 [2024-11-04 16:21:50.582384] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:31.990 [2024-11-04 16:21:50.582400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.990 [2024-11-04 16:21:50.582411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:31.990 [2024-11-04 16:21:50.582421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:31.990 [2024-11-04 16:21:50.582431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.990 [2024-11-04 16:21:50.616564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.990 [2024-11-04 16:21:50.616613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:31.990 [2024-11-04 16:21:50.616626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.169 ms 00:27:31.990 [2024-11-04 16:21:50.616643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.990 [2024-11-04 16:21:50.616711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.990 [2024-11-04 16:21:50.616723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:31.990 [2024-11-04 16:21:50.616733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:31.990 [2024-11-04 16:21:50.616742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
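As a rough cross-check of the management-process total reported just below ('FTL startup', duration = 360.742 ms), the per-step durations can be summed over a slice of the saved console log covering one startup. A minimal sketch under the same saved-log, one-record-per-line assumption as earlier (startup_slice.log is an assumed filename):

awk '/trace_step/ && match($0, /duration: [0-9.]+ ms/) {
         sum += substr($0, RSTART + 10, RLENGTH - 13)    # accumulate per-step durations
     }
     END { printf "sum of trace_step durations: %.3f ms\n", sum }' startup_slice.log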
00:27:31.990 [2024-11-04 16:21:50.617881] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 360.742 ms, result 0 00:27:33.369  [2024-11-04T16:21:53.028Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-04T16:21:53.964Z] Copying: 51/1024 [MB] (25 MBps) [2024-11-04T16:21:54.901Z] Copying: 77/1024 [MB] (26 MBps) [2024-11-04T16:21:55.837Z] Copying: 103/1024 [MB] (26 MBps) [2024-11-04T16:21:57.216Z] Copying: 130/1024 [MB] (26 MBps) [2024-11-04T16:21:58.152Z] Copying: 156/1024 [MB] (26 MBps) [2024-11-04T16:21:59.089Z] Copying: 182/1024 [MB] (26 MBps) [2024-11-04T16:22:00.025Z] Copying: 208/1024 [MB] (26 MBps) [2024-11-04T16:22:00.962Z] Copying: 234/1024 [MB] (26 MBps) [2024-11-04T16:22:01.902Z] Copying: 260/1024 [MB] (25 MBps) [2024-11-04T16:22:02.840Z] Copying: 286/1024 [MB] (25 MBps) [2024-11-04T16:22:04.216Z] Copying: 314/1024 [MB] (27 MBps) [2024-11-04T16:22:05.153Z] Copying: 341/1024 [MB] (26 MBps) [2024-11-04T16:22:06.089Z] Copying: 367/1024 [MB] (26 MBps) [2024-11-04T16:22:07.025Z] Copying: 393/1024 [MB] (25 MBps) [2024-11-04T16:22:07.962Z] Copying: 419/1024 [MB] (26 MBps) [2024-11-04T16:22:08.899Z] Copying: 446/1024 [MB] (26 MBps) [2024-11-04T16:22:09.843Z] Copying: 473/1024 [MB] (27 MBps) [2024-11-04T16:22:10.798Z] Copying: 499/1024 [MB] (26 MBps) [2024-11-04T16:22:12.176Z] Copying: 526/1024 [MB] (26 MBps) [2024-11-04T16:22:13.115Z] Copying: 552/1024 [MB] (26 MBps) [2024-11-04T16:22:14.054Z] Copying: 579/1024 [MB] (26 MBps) [2024-11-04T16:22:14.992Z] Copying: 605/1024 [MB] (26 MBps) [2024-11-04T16:22:15.931Z] Copying: 631/1024 [MB] (26 MBps) [2024-11-04T16:22:16.868Z] Copying: 657/1024 [MB] (25 MBps) [2024-11-04T16:22:17.805Z] Copying: 683/1024 [MB] (26 MBps) [2024-11-04T16:22:19.184Z] Copying: 711/1024 [MB] (27 MBps) [2024-11-04T16:22:20.120Z] Copying: 738/1024 [MB] (27 MBps) [2024-11-04T16:22:21.057Z] Copying: 763/1024 [MB] (25 MBps) [2024-11-04T16:22:21.994Z] Copying: 789/1024 [MB] (26 MBps) [2024-11-04T16:22:22.931Z] Copying: 814/1024 [MB] (24 MBps) [2024-11-04T16:22:23.903Z] Copying: 839/1024 [MB] (24 MBps) [2024-11-04T16:22:24.840Z] Copying: 864/1024 [MB] (25 MBps) [2024-11-04T16:22:25.777Z] Copying: 889/1024 [MB] (24 MBps) [2024-11-04T16:22:27.154Z] Copying: 915/1024 [MB] (25 MBps) [2024-11-04T16:22:28.090Z] Copying: 939/1024 [MB] (24 MBps) [2024-11-04T16:22:29.028Z] Copying: 965/1024 [MB] (25 MBps) [2024-11-04T16:22:29.964Z] Copying: 991/1024 [MB] (25 MBps) [2024-11-04T16:22:30.223Z] Copying: 1017/1024 [MB] (26 MBps) [2024-11-04T16:22:30.223Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-04 16:22:30.063087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.501 [2024-11-04 16:22:30.063145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:11.501 [2024-11-04 16:22:30.063161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:11.501 [2024-11-04 16:22:30.063171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.501 [2024-11-04 16:22:30.063208] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:11.501 [2024-11-04 16:22:30.068754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.501 [2024-11-04 16:22:30.068798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:11.501 [2024-11-04 16:22:30.068821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.528 ms 00:28:11.501 
[2024-11-04 16:22:30.068835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.501 [2024-11-04 16:22:30.069078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.501 [2024-11-04 16:22:30.069094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:11.501 [2024-11-04 16:22:30.069108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:28:11.501 [2024-11-04 16:22:30.069121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.501 [2024-11-04 16:22:30.072703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.501 [2024-11-04 16:22:30.072735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:11.501 [2024-11-04 16:22:30.072764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.568 ms 00:28:11.501 [2024-11-04 16:22:30.072778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.501 [2024-11-04 16:22:30.078833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.501 [2024-11-04 16:22:30.078866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:11.501 [2024-11-04 16:22:30.078877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.037 ms 00:28:11.501 [2024-11-04 16:22:30.078887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.501 [2024-11-04 16:22:30.113639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.501 [2024-11-04 16:22:30.113675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:11.501 [2024-11-04 16:22:30.113687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.750 ms 00:28:11.501 [2024-11-04 16:22:30.113697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.501 [2024-11-04 16:22:30.134037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.501 [2024-11-04 16:22:30.134088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:11.501 [2024-11-04 16:22:30.134101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.338 ms 00:28:11.502 [2024-11-04 16:22:30.134111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.502 [2024-11-04 16:22:30.136409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.502 [2024-11-04 16:22:30.136559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:11.502 [2024-11-04 16:22:30.136581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.262 ms 00:28:11.502 [2024-11-04 16:22:30.136592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.502 [2024-11-04 16:22:30.170462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.502 [2024-11-04 16:22:30.170497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:11.502 [2024-11-04 16:22:30.170509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.904 ms 00:28:11.502 [2024-11-04 16:22:30.170534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.502 [2024-11-04 16:22:30.204300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.502 [2024-11-04 16:22:30.204347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:11.502 [2024-11-04 16:22:30.204359] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.785 ms 00:28:11.502 [2024-11-04 16:22:30.204384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.762 [2024-11-04 16:22:30.238435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.762 [2024-11-04 16:22:30.238481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:11.762 [2024-11-04 16:22:30.238493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.070 ms 00:28:11.762 [2024-11-04 16:22:30.238518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.762 [2024-11-04 16:22:30.271657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.762 [2024-11-04 16:22:30.271819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:11.762 [2024-11-04 16:22:30.271871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.122 ms 00:28:11.762 [2024-11-04 16:22:30.271882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.762 [2024-11-04 16:22:30.271916] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:11.762 [2024-11-04 16:22:30.271932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:11.762 [2024-11-04 16:22:30.271951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:11.762 [2024-11-04 16:22:30.271963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.271973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.271984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.271995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272110] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272374] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:11.762 [2024-11-04 16:22:30.272613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 
16:22:30.272634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:28:11.763 [2024-11-04 16:22:30.272905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.272989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.273000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:11.763 [2024-11-04 16:22:30.273016] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:11.763 [2024-11-04 16:22:30.273030] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7ec1fce4-e67d-4e51-9974-2fdcba28edba 00:28:11.763 [2024-11-04 16:22:30.273041] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:11.763 [2024-11-04 16:22:30.273050] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:11.763 [2024-11-04 16:22:30.273059] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:11.763 [2024-11-04 16:22:30.273069] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:11.763 [2024-11-04 16:22:30.273079] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:11.763 [2024-11-04 16:22:30.273089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:11.763 [2024-11-04 16:22:30.273107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:11.763 [2024-11-04 16:22:30.273116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:11.763 [2024-11-04 16:22:30.273125] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:11.763 [2024-11-04 16:22:30.273134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.763 [2024-11-04 16:22:30.273144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:11.763 [2024-11-04 16:22:30.273154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.221 ms 00:28:11.763 [2024-11-04 16:22:30.273164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.763 [2024-11-04 16:22:30.291688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.763 [2024-11-04 16:22:30.291832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:11.763 [2024-11-04 16:22:30.291868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.505 ms 00:28:11.763 [2024-11-04 16:22:30.291879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.763 [2024-11-04 16:22:30.292425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:11.763 [2024-11-04 16:22:30.292436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:11.763 [2024-11-04 16:22:30.292453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:28:11.763 [2024-11-04 16:22:30.292463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.763 [2024-11-04 16:22:30.340896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.763 [2024-11-04 16:22:30.340939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:11.763 [2024-11-04 16:22:30.340951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.763 [2024-11-04 16:22:30.340977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.763 [2024-11-04 16:22:30.341034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.763 [2024-11-04 16:22:30.341044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:11.763 [2024-11-04 16:22:30.341059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.763 [2024-11-04 16:22:30.341069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.763 [2024-11-04 16:22:30.341129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.763 [2024-11-04 16:22:30.341141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:11.763 [2024-11-04 16:22:30.341151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.763 [2024-11-04 16:22:30.341161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.763 [2024-11-04 16:22:30.341176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.763 [2024-11-04 16:22:30.341185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:11.763 [2024-11-04 16:22:30.341194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.763 [2024-11-04 16:22:30.341208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.763 [2024-11-04 16:22:30.460616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.763 [2024-11-04 16:22:30.460664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:11.763 [2024-11-04 16:22:30.460678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.763 [2024-11-04 16:22:30.460695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.022 [2024-11-04 16:22:30.555824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.022 [2024-11-04 16:22:30.555870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:12.023 [2024-11-04 16:22:30.555884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.023 [2024-11-04 16:22:30.555899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.023 [2024-11-04 16:22:30.555998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.023 [2024-11-04 16:22:30.556010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:12.023 [2024-11-04 16:22:30.556021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.023 [2024-11-04 16:22:30.556031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.023 
[2024-11-04 16:22:30.556069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.023 [2024-11-04 16:22:30.556101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:12.023 [2024-11-04 16:22:30.556112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.023 [2024-11-04 16:22:30.556123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.023 [2024-11-04 16:22:30.556225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.023 [2024-11-04 16:22:30.556246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:12.023 [2024-11-04 16:22:30.556257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.023 [2024-11-04 16:22:30.556274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.023 [2024-11-04 16:22:30.556309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.023 [2024-11-04 16:22:30.556322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:12.023 [2024-11-04 16:22:30.556338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.023 [2024-11-04 16:22:30.556348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.023 [2024-11-04 16:22:30.556394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.023 [2024-11-04 16:22:30.556407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:12.023 [2024-11-04 16:22:30.556418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.023 [2024-11-04 16:22:30.556428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.023 [2024-11-04 16:22:30.556467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.023 [2024-11-04 16:22:30.556479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:12.023 [2024-11-04 16:22:30.556490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.023 [2024-11-04 16:22:30.556502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.023 [2024-11-04 16:22:30.556624] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 494.313 ms, result 0 00:28:12.959 00:28:12.959 00:28:12.959 16:22:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:14.861 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:14.861 16:22:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:14.861 16:22:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:14.861 16:22:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:14.861 16:22:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:14.861 16:22:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:14.861 16:22:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:14.861 16:22:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:14.861 Process with pid 78438 
is not found 00:28:14.861 16:22:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78438 00:28:14.861 16:22:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 78438 ']' 00:28:14.861 16:22:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 78438 00:28:14.861 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (78438) - No such process 00:28:14.861 16:22:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 78438 is not found' 00:28:14.861 16:22:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:15.120 Remove shared memory files 00:28:15.120 16:22:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:15.120 16:22:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:15.120 16:22:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:15.120 16:22:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:15.120 16:22:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:15.120 16:22:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:15.120 16:22:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:15.120 ************************************ 00:28:15.120 END TEST ftl_dirty_shutdown 00:28:15.120 ************************************ 00:28:15.120 00:28:15.120 real 3m35.659s 00:28:15.120 user 4m0.515s 00:28:15.120 sys 0m38.543s 00:28:15.120 16:22:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:15.120 16:22:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:15.380 16:22:33 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:15.380 16:22:33 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:15.380 16:22:33 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:15.380 16:22:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:15.380 ************************************ 00:28:15.380 START TEST ftl_upgrade_shutdown 00:28:15.380 ************************************ 00:28:15.380 16:22:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:15.380 * Looking for test storage... 
00:28:15.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:15.380 16:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:15.380 16:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:28:15.380 16:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:15.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.639 --rc genhtml_branch_coverage=1 00:28:15.639 --rc genhtml_function_coverage=1 00:28:15.639 --rc genhtml_legend=1 00:28:15.639 --rc geninfo_all_blocks=1 00:28:15.639 --rc geninfo_unexecuted_blocks=1 00:28:15.639 00:28:15.639 ' 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:15.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.639 --rc genhtml_branch_coverage=1 00:28:15.639 --rc genhtml_function_coverage=1 00:28:15.639 --rc genhtml_legend=1 00:28:15.639 --rc geninfo_all_blocks=1 00:28:15.639 --rc geninfo_unexecuted_blocks=1 00:28:15.639 00:28:15.639 ' 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:15.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.639 --rc genhtml_branch_coverage=1 00:28:15.639 --rc genhtml_function_coverage=1 00:28:15.639 --rc genhtml_legend=1 00:28:15.639 --rc geninfo_all_blocks=1 00:28:15.639 --rc geninfo_unexecuted_blocks=1 00:28:15.639 00:28:15.639 ' 00:28:15.639 16:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:15.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.639 --rc genhtml_branch_coverage=1 00:28:15.640 --rc genhtml_function_coverage=1 00:28:15.640 --rc genhtml_legend=1 00:28:15.640 --rc geninfo_all_blocks=1 00:28:15.640 --rc geninfo_unexecuted_blocks=1 00:28:15.640 00:28:15.640 ' 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:15.640 16:22:34 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80764 00:28:15.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80764 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80764 ']' 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:15.640 16:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:15.640 [2024-11-04 16:22:34.293604] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:28:15.640 [2024-11-04 16:22:34.293938] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80764 ] 00:28:15.899 [2024-11-04 16:22:34.472636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.899 [2024-11-04 16:22:34.577355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:16.840 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:17.099 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:17.099 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:17.099 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:17.099 16:22:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:28:17.099 16:22:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:28:17.099 16:22:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:28:17.099 16:22:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:28:17.099 16:22:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:17.359 16:22:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:28:17.359 { 00:28:17.359 "name": "basen1", 00:28:17.359 "aliases": [ 00:28:17.359 "dc2d31e7-5d82-4c92-bf22-30fdfebb24a6" 00:28:17.359 ], 00:28:17.359 "product_name": "NVMe disk", 00:28:17.359 "block_size": 4096, 00:28:17.359 "num_blocks": 1310720, 00:28:17.359 "uuid": "dc2d31e7-5d82-4c92-bf22-30fdfebb24a6", 00:28:17.359 "numa_id": -1, 00:28:17.359 "assigned_rate_limits": { 00:28:17.359 "rw_ios_per_sec": 0, 00:28:17.359 "rw_mbytes_per_sec": 0, 00:28:17.359 "r_mbytes_per_sec": 0, 00:28:17.359 "w_mbytes_per_sec": 0 00:28:17.359 }, 00:28:17.359 "claimed": true, 00:28:17.359 "claim_type": "read_many_write_one", 00:28:17.359 "zoned": false, 00:28:17.359 "supported_io_types": { 00:28:17.359 "read": true, 00:28:17.359 "write": true, 00:28:17.359 "unmap": true, 00:28:17.359 "flush": true, 00:28:17.359 "reset": true, 00:28:17.359 "nvme_admin": true, 00:28:17.359 "nvme_io": true, 00:28:17.359 "nvme_io_md": false, 00:28:17.359 "write_zeroes": true, 00:28:17.359 "zcopy": false, 00:28:17.359 "get_zone_info": false, 00:28:17.359 "zone_management": false, 00:28:17.359 "zone_append": false, 00:28:17.359 "compare": true, 00:28:17.359 "compare_and_write": false, 00:28:17.359 "abort": true, 00:28:17.359 "seek_hole": false, 00:28:17.359 "seek_data": false, 00:28:17.359 "copy": true, 00:28:17.359 "nvme_iov_md": false 00:28:17.359 }, 00:28:17.359 "driver_specific": { 00:28:17.359 "nvme": [ 00:28:17.359 { 00:28:17.359 "pci_address": "0000:00:11.0", 00:28:17.359 "trid": { 00:28:17.359 "trtype": "PCIe", 00:28:17.359 "traddr": "0000:00:11.0" 00:28:17.359 }, 00:28:17.359 "ctrlr_data": { 00:28:17.359 "cntlid": 0, 00:28:17.359 "vendor_id": "0x1b36", 00:28:17.359 "model_number": "QEMU NVMe Ctrl", 00:28:17.359 "serial_number": "12341", 00:28:17.359 "firmware_revision": "8.0.0", 00:28:17.359 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:17.359 "oacs": { 00:28:17.359 "security": 0, 00:28:17.359 "format": 1, 00:28:17.359 "firmware": 0, 00:28:17.359 "ns_manage": 1 00:28:17.359 }, 00:28:17.359 "multi_ctrlr": false, 00:28:17.359 "ana_reporting": false 00:28:17.359 }, 00:28:17.359 "vs": { 00:28:17.359 "nvme_version": "1.4" 00:28:17.359 }, 00:28:17.359 "ns_data": { 00:28:17.359 "id": 1, 00:28:17.359 "can_share": false 00:28:17.359 } 00:28:17.359 } 00:28:17.359 ], 00:28:17.359 "mp_policy": "active_passive" 00:28:17.359 } 00:28:17.359 } 00:28:17.359 ]' 00:28:17.359 16:22:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:28:17.359 16:22:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:28:17.359 16:22:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:28:17.359 16:22:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:28:17.359 16:22:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:28:17.359 16:22:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:28:17.359 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:17.359 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:17.359 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:17.359 16:22:35 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:17.359 16:22:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:17.618 16:22:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=323048fa-1657-4333-a4c4-269551e56038 00:28:17.618 16:22:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:17.618 16:22:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 323048fa-1657-4333-a4c4-269551e56038 00:28:17.877 16:22:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:17.877 16:22:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=d6cd4f8c-9cdf-4ef2-9b7d-70a65afc9ccd 00:28:17.877 16:22:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u d6cd4f8c-9cdf-4ef2-9b7d-70a65afc9ccd 00:28:18.135 16:22:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=93133bd7-6dad-48e1-8f9b-02daf5e42fcb 00:28:18.135 16:22:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 93133bd7-6dad-48e1-8f9b-02daf5e42fcb ]] 00:28:18.135 16:22:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 93133bd7-6dad-48e1-8f9b-02daf5e42fcb 5120 00:28:18.135 16:22:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:18.135 16:22:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:18.135 16:22:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=93133bd7-6dad-48e1-8f9b-02daf5e42fcb 00:28:18.135 16:22:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:18.135 16:22:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 93133bd7-6dad-48e1-8f9b-02daf5e42fcb 00:28:18.135 16:22:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=93133bd7-6dad-48e1-8f9b-02daf5e42fcb 00:28:18.135 16:22:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:28:18.135 16:22:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:28:18.136 16:22:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:28:18.136 16:22:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93133bd7-6dad-48e1-8f9b-02daf5e42fcb 00:28:18.394 16:22:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:28:18.394 { 00:28:18.394 "name": "93133bd7-6dad-48e1-8f9b-02daf5e42fcb", 00:28:18.394 "aliases": [ 00:28:18.394 "lvs/basen1p0" 00:28:18.394 ], 00:28:18.394 "product_name": "Logical Volume", 00:28:18.394 "block_size": 4096, 00:28:18.394 "num_blocks": 5242880, 00:28:18.394 "uuid": "93133bd7-6dad-48e1-8f9b-02daf5e42fcb", 00:28:18.394 "assigned_rate_limits": { 00:28:18.394 "rw_ios_per_sec": 0, 00:28:18.394 "rw_mbytes_per_sec": 0, 00:28:18.394 "r_mbytes_per_sec": 0, 00:28:18.394 "w_mbytes_per_sec": 0 00:28:18.394 }, 00:28:18.394 "claimed": false, 00:28:18.394 "zoned": false, 00:28:18.394 "supported_io_types": { 00:28:18.394 "read": true, 00:28:18.394 "write": true, 00:28:18.394 "unmap": true, 00:28:18.394 "flush": false, 00:28:18.394 "reset": true, 00:28:18.394 "nvme_admin": false, 00:28:18.394 "nvme_io": false, 00:28:18.394 "nvme_io_md": false, 00:28:18.394 "write_zeroes": 
true, 00:28:18.394 "zcopy": false, 00:28:18.394 "get_zone_info": false, 00:28:18.394 "zone_management": false, 00:28:18.394 "zone_append": false, 00:28:18.394 "compare": false, 00:28:18.394 "compare_and_write": false, 00:28:18.394 "abort": false, 00:28:18.394 "seek_hole": true, 00:28:18.394 "seek_data": true, 00:28:18.394 "copy": false, 00:28:18.394 "nvme_iov_md": false 00:28:18.394 }, 00:28:18.394 "driver_specific": { 00:28:18.395 "lvol": { 00:28:18.395 "lvol_store_uuid": "d6cd4f8c-9cdf-4ef2-9b7d-70a65afc9ccd", 00:28:18.395 "base_bdev": "basen1", 00:28:18.395 "thin_provision": true, 00:28:18.395 "num_allocated_clusters": 0, 00:28:18.395 "snapshot": false, 00:28:18.395 "clone": false, 00:28:18.395 "esnap_clone": false 00:28:18.395 } 00:28:18.395 } 00:28:18.395 } 00:28:18.395 ]' 00:28:18.395 16:22:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:28:18.395 16:22:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:28:18.395 16:22:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:28:18.395 16:22:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:28:18.395 16:22:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:28:18.395 16:22:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:28:18.395 16:22:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:18.395 16:22:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:18.395 16:22:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:18.653 16:22:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:18.653 16:22:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:18.653 16:22:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:18.912 16:22:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:18.912 16:22:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:18.912 16:22:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 93133bd7-6dad-48e1-8f9b-02daf5e42fcb -c cachen1p0 --l2p_dram_limit 2 00:28:19.171 [2024-11-04 16:22:37.671373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.171 [2024-11-04 16:22:37.671423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:19.171 [2024-11-04 16:22:37.671442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:19.171 [2024-11-04 16:22:37.671453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.171 [2024-11-04 16:22:37.671514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.171 [2024-11-04 16:22:37.671526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:19.172 [2024-11-04 16:22:37.671539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:28:19.172 [2024-11-04 16:22:37.671549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.172 [2024-11-04 16:22:37.671572] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:19.172 [2024-11-04 
16:22:37.672538] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:19.172 [2024-11-04 16:22:37.672570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.172 [2024-11-04 16:22:37.672581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:19.172 [2024-11-04 16:22:37.672595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.001 ms 00:28:19.172 [2024-11-04 16:22:37.672605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.172 [2024-11-04 16:22:37.672687] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID f7e20e0b-2d25-4b61-a652-08f699c4c408 00:28:19.172 [2024-11-04 16:22:37.674155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.172 [2024-11-04 16:22:37.674194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:19.172 [2024-11-04 16:22:37.674206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:19.172 [2024-11-04 16:22:37.674220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.172 [2024-11-04 16:22:37.681851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.172 [2024-11-04 16:22:37.681881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:19.172 [2024-11-04 16:22:37.681895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.599 ms 00:28:19.172 [2024-11-04 16:22:37.681923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.172 [2024-11-04 16:22:37.681967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.172 [2024-11-04 16:22:37.681982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:19.172 [2024-11-04 16:22:37.681993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:19.172 [2024-11-04 16:22:37.682008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.172 [2024-11-04 16:22:37.682058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.172 [2024-11-04 16:22:37.682073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:19.172 [2024-11-04 16:22:37.682084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:19.172 [2024-11-04 16:22:37.682101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.172 [2024-11-04 16:22:37.682126] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:19.172 [2024-11-04 16:22:37.687135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.172 [2024-11-04 16:22:37.687169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:19.172 [2024-11-04 16:22:37.687184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.022 ms 00:28:19.172 [2024-11-04 16:22:37.687210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.172 [2024-11-04 16:22:37.687240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.172 [2024-11-04 16:22:37.687250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:19.172 [2024-11-04 16:22:37.687263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:19.172 [2024-11-04 16:22:37.687273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:19.172 [2024-11-04 16:22:37.687316] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:19.172 [2024-11-04 16:22:37.687437] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:19.172 [2024-11-04 16:22:37.687456] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:19.172 [2024-11-04 16:22:37.687469] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:19.172 [2024-11-04 16:22:37.687485] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:19.172 [2024-11-04 16:22:37.687496] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:19.172 [2024-11-04 16:22:37.687509] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:19.172 [2024-11-04 16:22:37.687519] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:19.172 [2024-11-04 16:22:37.687534] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:19.172 [2024-11-04 16:22:37.687544] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:19.172 [2024-11-04 16:22:37.687556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.172 [2024-11-04 16:22:37.687566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:19.172 [2024-11-04 16:22:37.687579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.242 ms 00:28:19.172 [2024-11-04 16:22:37.687588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.172 [2024-11-04 16:22:37.687660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.172 [2024-11-04 16:22:37.687670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:19.172 [2024-11-04 16:22:37.687684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:28:19.172 [2024-11-04 16:22:37.687704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.172 [2024-11-04 16:22:37.687808] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:19.172 [2024-11-04 16:22:37.687820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:19.172 [2024-11-04 16:22:37.687833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:19.172 [2024-11-04 16:22:37.687843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:19.172 [2024-11-04 16:22:37.687856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:19.172 [2024-11-04 16:22:37.687865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:19.172 [2024-11-04 16:22:37.687876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:19.172 [2024-11-04 16:22:37.687885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:19.172 [2024-11-04 16:22:37.687897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:19.172 [2024-11-04 16:22:37.687906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:19.172 [2024-11-04 16:22:37.687918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:19.172 [2024-11-04 16:22:37.687928] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:28:19.172 [2024-11-04 16:22:37.687939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:19.172 [2024-11-04 16:22:37.687949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:19.172 [2024-11-04 16:22:37.687961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:19.172 [2024-11-04 16:22:37.687969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:19.172 [2024-11-04 16:22:37.687983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:19.172 [2024-11-04 16:22:37.687992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:19.172 [2024-11-04 16:22:37.688005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:19.172 [2024-11-04 16:22:37.688015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:19.172 [2024-11-04 16:22:37.688026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:19.172 [2024-11-04 16:22:37.688035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:19.172 [2024-11-04 16:22:37.688047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:19.172 [2024-11-04 16:22:37.688056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:19.172 [2024-11-04 16:22:37.688067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:19.172 [2024-11-04 16:22:37.688076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:19.172 [2024-11-04 16:22:37.688088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:19.172 [2024-11-04 16:22:37.688096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:19.172 [2024-11-04 16:22:37.688108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:19.172 [2024-11-04 16:22:37.688118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:19.172 [2024-11-04 16:22:37.688129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:19.172 [2024-11-04 16:22:37.688137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:19.172 [2024-11-04 16:22:37.688151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:19.172 [2024-11-04 16:22:37.688159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:19.172 [2024-11-04 16:22:37.688171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:19.172 [2024-11-04 16:22:37.688180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:19.172 [2024-11-04 16:22:37.688190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:19.172 [2024-11-04 16:22:37.688199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:19.172 [2024-11-04 16:22:37.688211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:19.172 [2024-11-04 16:22:37.688219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:19.172 [2024-11-04 16:22:37.688231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:19.172 [2024-11-04 16:22:37.688239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:19.172 [2024-11-04 16:22:37.688250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:19.172 [2024-11-04 16:22:37.688259] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:28:19.172 [2024-11-04 16:22:37.688271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:19.172 [2024-11-04 16:22:37.688283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:19.172 [2024-11-04 16:22:37.688297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:19.172 [2024-11-04 16:22:37.688307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:19.172 [2024-11-04 16:22:37.688322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:19.172 [2024-11-04 16:22:37.688331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:19.172 [2024-11-04 16:22:37.688343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:19.172 [2024-11-04 16:22:37.688352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:19.172 [2024-11-04 16:22:37.688363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:19.172 [2024-11-04 16:22:37.688377] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:19.173 [2024-11-04 16:22:37.688392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.173 [2024-11-04 16:22:37.688406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:19.173 [2024-11-04 16:22:37.688418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:19.173 [2024-11-04 16:22:37.688428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:19.173 [2024-11-04 16:22:37.688440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:19.173 [2024-11-04 16:22:37.688450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:19.173 [2024-11-04 16:22:37.688463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:19.173 [2024-11-04 16:22:37.688473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:19.173 [2024-11-04 16:22:37.688486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:19.173 [2024-11-04 16:22:37.688496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:19.173 [2024-11-04 16:22:37.688510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:19.173 [2024-11-04 16:22:37.688520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:19.173 [2024-11-04 16:22:37.688532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:19.173 [2024-11-04 16:22:37.688542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:19.173 [2024-11-04 16:22:37.688556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:19.173 [2024-11-04 16:22:37.688565] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:19.173 [2024-11-04 16:22:37.688579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:19.173 [2024-11-04 16:22:37.688590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:19.173 [2024-11-04 16:22:37.688602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:19.173 [2024-11-04 16:22:37.688612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:19.173 [2024-11-04 16:22:37.688625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:19.173 [2024-11-04 16:22:37.688636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.173 [2024-11-04 16:22:37.688648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:19.173 [2024-11-04 16:22:37.688658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.898 ms 00:28:19.173 [2024-11-04 16:22:37.688670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.173 [2024-11-04 16:22:37.688708] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
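The FTL instance starting up in this trace sits on a stack that ftl/common.sh assembled a few lines earlier: a 20480 MiB thin-provisioned lvol on basen1 (block_size 4096 x num_blocks 5242880 = 20 GiB, which is what get_bdev_size echoes back as 20480 MiB) plus a 5120 MiB split of the PCIe cache namespace. A condensed sketch of that RPC sequence, reconstructed from the trace and assuming the same bdev names and checkout path used in this run:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"

  # Drop any lvstore left over from a previous run (common.sh@28-30).
  for lvs in $($RPC bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
      $RPC bdev_lvol_delete_lvstore -u "$lvs"
  done

  # Base device: new lvstore on basen1, then a 20480 MiB thin-provisioned lvol.
  lvs=$($RPC bdev_lvol_create_lvstore basen1 lvs)
  base_bdev=$($RPC bdev_lvol_create basen1p0 20480 -t -u "$lvs")

  # NV cache: attach the controller at 0000:00:10.0 and split off a 5120 MiB partition.
  $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # namespace shows up as cachen1
  $RPC bdev_split_create cachen1 -s 5120 1                            # -> cachen1p0

  # FTL bdev over base lvol + cache split; the l2p_dram_limit value is the one used by this run.
  $RPC -t 60 bdev_ftl_create -b ftl -d "$base_bdev" -c cachen1p0 --l2p_dram_limit 2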
00:28:19.173 [2024-11-04 16:22:37.688726] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:22.461 [2024-11-04 16:22:41.176651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.461 [2024-11-04 16:22:41.176946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:22.461 [2024-11-04 16:22:41.177042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3493.603 ms 00:28:22.461 [2024-11-04 16:22:41.177085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.719 [2024-11-04 16:22:41.209773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.719 [2024-11-04 16:22:41.210005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:22.719 [2024-11-04 16:22:41.210096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.381 ms 00:28:22.719 [2024-11-04 16:22:41.210139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.719 [2024-11-04 16:22:41.210244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.719 [2024-11-04 16:22:41.210285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:22.719 [2024-11-04 16:22:41.210379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:22.719 [2024-11-04 16:22:41.210427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.719 [2024-11-04 16:22:41.255347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.719 [2024-11-04 16:22:41.255519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:22.719 [2024-11-04 16:22:41.255653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.917 ms 00:28:22.719 [2024-11-04 16:22:41.255696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.719 [2024-11-04 16:22:41.255763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.719 [2024-11-04 16:22:41.255801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:22.719 [2024-11-04 16:22:41.255832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:22.719 [2024-11-04 16:22:41.255924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.719 [2024-11-04 16:22:41.256449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.719 [2024-11-04 16:22:41.256498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:22.719 [2024-11-04 16:22:41.256587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.427 ms 00:28:22.719 [2024-11-04 16:22:41.256674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.719 [2024-11-04 16:22:41.256763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.719 [2024-11-04 16:22:41.256850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:22.719 [2024-11-04 16:22:41.256886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:28:22.719 [2024-11-04 16:22:41.256922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.719 [2024-11-04 16:22:41.276598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.719 [2024-11-04 16:22:41.276779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:22.719 [2024-11-04 16:22:41.276856] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.615 ms 00:28:22.719 [2024-11-04 16:22:41.276895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.719 [2024-11-04 16:22:41.288885] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:22.719 [2024-11-04 16:22:41.290069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.719 [2024-11-04 16:22:41.290186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:22.719 [2024-11-04 16:22:41.290263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.090 ms 00:28:22.719 [2024-11-04 16:22:41.290298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.719 [2024-11-04 16:22:41.341315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.719 [2024-11-04 16:22:41.341455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:22.719 [2024-11-04 16:22:41.341617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 51.046 ms 00:28:22.719 [2024-11-04 16:22:41.341657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.719 [2024-11-04 16:22:41.341777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.719 [2024-11-04 16:22:41.341911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:22.719 [2024-11-04 16:22:41.342022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:28:22.719 [2024-11-04 16:22:41.342061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.719 [2024-11-04 16:22:41.376203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.719 [2024-11-04 16:22:41.376331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:22.719 [2024-11-04 16:22:41.376426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.114 ms 00:28:22.719 [2024-11-04 16:22:41.376462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.719 [2024-11-04 16:22:41.411272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.719 [2024-11-04 16:22:41.411408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:22.719 [2024-11-04 16:22:41.411483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.799 ms 00:28:22.719 [2024-11-04 16:22:41.411518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.719 [2024-11-04 16:22:41.412216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.719 [2024-11-04 16:22:41.412285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:22.719 [2024-11-04 16:22:41.412327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.603 ms 00:28:22.719 [2024-11-04 16:22:41.412360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.978 [2024-11-04 16:22:41.510904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.978 [2024-11-04 16:22:41.511044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:22.978 [2024-11-04 16:22:41.511139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 98.623 ms 00:28:22.978 [2024-11-04 16:22:41.511176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.978 [2024-11-04 16:22:41.545920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:22.978 [2024-11-04 16:22:41.546051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:22.978 [2024-11-04 16:22:41.546154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.659 ms 00:28:22.978 [2024-11-04 16:22:41.546191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.978 [2024-11-04 16:22:41.579588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.978 [2024-11-04 16:22:41.579727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:22.978 [2024-11-04 16:22:41.579849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.387 ms 00:28:22.978 [2024-11-04 16:22:41.579889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.978 [2024-11-04 16:22:41.613863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.978 [2024-11-04 16:22:41.613995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:22.978 [2024-11-04 16:22:41.614035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.959 ms 00:28:22.978 [2024-11-04 16:22:41.614045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.978 [2024-11-04 16:22:41.614091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.978 [2024-11-04 16:22:41.614102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:22.978 [2024-11-04 16:22:41.614119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:22.978 [2024-11-04 16:22:41.614129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.978 [2024-11-04 16:22:41.614229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.978 [2024-11-04 16:22:41.614244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:22.978 [2024-11-04 16:22:41.614257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:28:22.978 [2024-11-04 16:22:41.614268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.978 [2024-11-04 16:22:41.615339] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3949.960 ms, result 0 00:28:22.978 { 00:28:22.978 "name": "ftl", 00:28:22.978 "uuid": "f7e20e0b-2d25-4b61-a652-08f699c4c408" 00:28:22.978 } 00:28:22.978 16:22:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:23.236 [2024-11-04 16:22:41.834155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:23.236 16:22:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:23.495 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:23.754 [2024-11-04 16:22:42.222030] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:23.754 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:23.754 [2024-11-04 16:22:42.407424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:23.754 16:22:42 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:24.322 Fill FTL, iteration 1 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80887 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80887 /var/tmp/spdk.tgt.sock 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80887 ']' 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:24.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:24.322 16:22:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:24.322 [2024-11-04 16:22:42.868657] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
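Once startup finishes, common.sh@121-126 publishes the new FTL bdev over NVMe/TCP so a separate initiator process can drive I/O against it. A sketch of that export, using the NQN and listen address from this run:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC nvmf_create_transport --trtype TCP
  $RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  $RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  $RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  $RPC save_config   # common.sh@126; the trace does not show where the JSON is redirected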
00:28:24.322 [2024-11-04 16:22:42.868803] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80887 ] 00:28:24.582 [2024-11-04 16:22:43.055114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.582 [2024-11-04 16:22:43.160955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.518 16:22:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:25.518 16:22:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:25.518 16:22:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:25.518 ftln1 00:28:25.776 16:22:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:25.776 16:22:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:25.776 16:22:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:25.776 16:22:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80887 00:28:25.776 16:22:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80887 ']' 00:28:25.776 16:22:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80887 00:28:25.776 16:22:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:28:25.776 16:22:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:25.776 16:22:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80887 00:28:25.776 16:22:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:25.776 killing process with pid 80887 00:28:25.776 16:22:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:25.776 16:22:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80887' 00:28:25.776 16:22:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80887 00:28:25.776 16:22:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80887 00:28:28.312 16:22:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:28.312 16:22:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:28.312 [2024-11-04 16:22:46.741631] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
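The ftln1 bdev used by the dd passes below comes from tcp_initiator_setup (common.sh@151-177): a short-lived second spdk_tgt pinned to core 1 attaches to the exported subsystem, its bdev subsystem config is dumped to ini.json, and the process is killed so spdk_dd can later recreate ftln1 from that JSON alone. A rough sketch of those steps; the real script waits on the RPC socket with waitforlisten, which is replaced by a plain sleep here:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
  INI=$SPDK_DIR/test/ftl/config/ini.json

  "$SPDK_DIR/build/bin/spdk_tgt" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  sleep 2   # placeholder for waitforlisten

  # Connect to the NVMe/TCP subsystem exported above; the namespace appears as ftln1.
  $RPC bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2018-09.io.spdk:cnode0

  # Capture just the bdev subsystem (common.sh@171-173).
  {
      echo '{"subsystems": ['
      $RPC save_subsystem_config -n bdev
      echo ']}'
  } > "$INI"

  kill "$spdk_ini_pid"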
00:28:28.312 [2024-11-04 16:22:46.741768] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80942 ] 00:28:28.312 [2024-11-04 16:22:46.926995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.570 [2024-11-04 16:22:47.054361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.948  [2024-11-04T16:22:49.606Z] Copying: 267/1024 [MB] (267 MBps) [2024-11-04T16:22:50.981Z] Copying: 517/1024 [MB] (250 MBps) [2024-11-04T16:22:51.548Z] Copying: 787/1024 [MB] (270 MBps) [2024-11-04T16:22:52.925Z] Copying: 1024/1024 [MB] (average 264 MBps) 00:28:34.203 00:28:34.203 Calculate MD5 checksum, iteration 1 00:28:34.203 16:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:34.203 16:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:34.203 16:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:34.203 16:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:34.203 16:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:34.203 16:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:34.203 16:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:34.203 16:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:34.203 [2024-11-04 16:22:52.694997] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
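Each fill pass is a single spdk_dd run driven by that ini.json rather than a live target: 1024 x 1 MiB blocks of urandom written to ftln1 at queue depth 2, with --seek advancing by 1024 on the next iteration (upgrade_shutdown.sh@28-41). Iteration 1 as a standalone command, with the paths from this run:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0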
00:28:34.204 [2024-11-04 16:22:52.695556] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81000 ] 00:28:34.204 [2024-11-04 16:22:52.876865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.462 [2024-11-04 16:22:53.004280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.839  [2024-11-04T16:22:55.128Z] Copying: 671/1024 [MB] (671 MBps) [2024-11-04T16:22:56.065Z] Copying: 1024/1024 [MB] (average 659 MBps) 00:28:37.343 00:28:37.343 16:22:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:37.343 16:22:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:39.244 16:22:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:39.244 16:22:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a7da18d8e6443d13e5319ac5ae8f28e1 00:28:39.244 Fill FTL, iteration 2 00:28:39.244 16:22:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:39.244 16:22:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:39.244 16:22:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:39.244 16:22:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:39.244 16:22:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:39.244 16:22:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:39.244 16:22:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:39.244 16:22:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:39.244 16:22:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:39.244 [2024-11-04 16:22:57.803349] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
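After each fill the same 1 GiB range is read back from ftln1 into test/ftl/file and hashed; the digest lands in sums[] (a7da18d8e6443d13e5319ac5ae8f28e1 for iteration 1 here) and skip advances by 1024 so iteration 2 verifies the next gigabyte. The read-back and checksum step, condensed from upgrade_shutdown.sh@43-48:

  FILE=/home/vagrant/spdk_repo/spdk/test/ftl/file

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of="$FILE" --bs=1048576 --count=1024 --qd=2 --skip=0

  sums[i]=$(md5sum "$FILE" | cut -f1 -d' ')   # i is the iteration index in the surrounding loop
  skip=$((skip + 1024))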
00:28:39.244 [2024-11-04 16:22:57.803679] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81056 ] 00:28:39.502 [2024-11-04 16:22:57.983028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.502 [2024-11-04 16:22:58.107047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.883  [2024-11-04T16:23:00.982Z] Copying: 269/1024 [MB] (269 MBps) [2024-11-04T16:23:01.918Z] Copying: 523/1024 [MB] (254 MBps) [2024-11-04T16:23:02.853Z] Copying: 779/1024 [MB] (256 MBps) [2024-11-04T16:23:03.790Z] Copying: 1024/1024 [MB] (average 257 MBps) 00:28:45.068 00:28:45.068 16:23:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:45.068 16:23:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:45.068 Calculate MD5 checksum, iteration 2 00:28:45.068 16:23:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:45.068 16:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:45.068 16:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:45.068 16:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:45.068 16:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:45.068 16:23:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:45.327 [2024-11-04 16:23:03.862872] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:28:45.327 [2024-11-04 16:23:03.863212] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81121 ] 00:28:45.327 [2024-11-04 16:23:04.044308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.585 [2024-11-04 16:23:04.180386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.488  [2024-11-04T16:23:06.469Z] Copying: 669/1024 [MB] (669 MBps) [2024-11-04T16:23:07.848Z] Copying: 1024/1024 [MB] (average 663 MBps) 00:28:49.126 00:28:49.126 16:23:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:49.126 16:23:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:51.030 16:23:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:51.030 16:23:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=afa82a5c7c867d8f22081b447c417fa2 00:28:51.031 16:23:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:51.031 16:23:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:51.031 16:23:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:51.031 [2024-11-04 16:23:09.603990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:51.031 [2024-11-04 16:23:09.604044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:51.031 [2024-11-04 16:23:09.604063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:51.031 [2024-11-04 16:23:09.604074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:51.031 [2024-11-04 16:23:09.604101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:51.031 [2024-11-04 16:23:09.604112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:51.031 [2024-11-04 16:23:09.604129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:51.031 [2024-11-04 16:23:09.604139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:51.031 [2024-11-04 16:23:09.604160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:51.031 [2024-11-04 16:23:09.604171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:51.031 [2024-11-04 16:23:09.604181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:51.031 [2024-11-04 16:23:09.604192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:51.031 [2024-11-04 16:23:09.604257] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.260 ms, result 0 00:28:51.031 true 00:28:51.031 16:23:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:51.290 { 00:28:51.290 "name": "ftl", 00:28:51.290 "properties": [ 00:28:51.290 { 00:28:51.290 "name": "superblock_version", 00:28:51.290 "value": 5, 00:28:51.290 "read-only": true 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "name": "base_device", 00:28:51.290 "bands": [ 00:28:51.290 { 00:28:51.290 "id": 0, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 
00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 1, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 2, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 3, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 4, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 5, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 6, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 7, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 8, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 9, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 10, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 11, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 12, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 13, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 14, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 15, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 16, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 17, 00:28:51.290 "state": "FREE", 00:28:51.290 "validity": 0.0 00:28:51.290 } 00:28:51.290 ], 00:28:51.290 "read-only": true 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "name": "cache_device", 00:28:51.290 "type": "bdev", 00:28:51.290 "chunks": [ 00:28:51.290 { 00:28:51.290 "id": 0, 00:28:51.290 "state": "INACTIVE", 00:28:51.290 "utilization": 0.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 1, 00:28:51.290 "state": "CLOSED", 00:28:51.290 "utilization": 1.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 2, 00:28:51.290 "state": "CLOSED", 00:28:51.290 "utilization": 1.0 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 3, 00:28:51.290 "state": "OPEN", 00:28:51.290 "utilization": 0.001953125 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "id": 4, 00:28:51.290 "state": "OPEN", 00:28:51.290 "utilization": 0.0 00:28:51.290 } 00:28:51.290 ], 00:28:51.290 "read-only": true 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "name": "verbose_mode", 00:28:51.290 "value": true, 00:28:51.290 "unit": "", 00:28:51.290 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:51.290 }, 00:28:51.290 { 00:28:51.290 "name": "prep_upgrade_on_shutdown", 00:28:51.290 "value": false, 00:28:51.290 "unit": "", 00:28:51.290 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:51.290 } 00:28:51.290 ] 00:28:51.290 } 00:28:51.290 16:23:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:51.549 [2024-11-04 16:23:10.039988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
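The pre-upgrade handover is driven purely through FTL properties: verbose_mode is enabled so the extra properties become visible, prep_upgrade_on_shutdown is flipped to true, and the test then confirms via jq that some cache chunks actually hold data (used=3 here: two CLOSED chunks at utilization 1.0 plus the partially filled OPEN one). A sketch of that check, reusing the exact jq filter from upgrade_shutdown.sh@63:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC bdev_ftl_set_property -b ftl -p verbose_mode -v true
  $RPC bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true

  used=$($RPC bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  [[ $used -eq 0 ]] && echo "no data in NV cache to migrate on shutdown"   # @64 checks this count against 0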
00:28:51.549 [2024-11-04 16:23:10.040823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:51.549 [2024-11-04 16:23:10.040909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:51.549 [2024-11-04 16:23:10.040957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:51.549 [2024-11-04 16:23:10.041123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:51.549 [2024-11-04 16:23:10.041162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:51.549 [2024-11-04 16:23:10.041198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:51.549 [2024-11-04 16:23:10.041230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:51.549 [2024-11-04 16:23:10.041300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:51.549 [2024-11-04 16:23:10.041334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:51.549 [2024-11-04 16:23:10.041367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:51.549 [2024-11-04 16:23:10.041398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:51.549 [2024-11-04 16:23:10.041602] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.515 ms, result 0 00:28:51.549 true 00:28:51.549 16:23:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:51.549 16:23:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:51.549 16:23:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:51.808 16:23:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:51.808 16:23:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:51.808 16:23:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:51.808 [2024-11-04 16:23:10.488934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:51.808 [2024-11-04 16:23:10.488979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:51.808 [2024-11-04 16:23:10.488996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:51.808 [2024-11-04 16:23:10.489006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:51.808 [2024-11-04 16:23:10.489030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:51.808 [2024-11-04 16:23:10.489040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:51.808 [2024-11-04 16:23:10.489051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:51.809 [2024-11-04 16:23:10.489060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:51.809 [2024-11-04 16:23:10.489080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:51.809 [2024-11-04 16:23:10.489090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:51.809 [2024-11-04 16:23:10.489101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:51.809 [2024-11-04 16:23:10.489110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:51.809 [2024-11-04 16:23:10.489166] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.224 ms, result 0 00:28:51.809 true 00:28:51.809 16:23:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:52.071 { 00:28:52.071 "name": "ftl", 00:28:52.071 "properties": [ 00:28:52.071 { 00:28:52.071 "name": "superblock_version", 00:28:52.071 "value": 5, 00:28:52.071 "read-only": true 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "name": "base_device", 00:28:52.071 "bands": [ 00:28:52.071 { 00:28:52.071 "id": 0, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "id": 1, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "id": 2, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "id": 3, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "id": 4, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "id": 5, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "id": 6, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "id": 7, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "id": 8, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "id": 9, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "id": 10, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "id": 11, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "id": 12, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "id": 13, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "id": 14, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "id": 15, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "id": 16, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "id": 17, 00:28:52.071 "state": "FREE", 00:28:52.071 "validity": 0.0 00:28:52.071 } 00:28:52.071 ], 00:28:52.071 "read-only": true 00:28:52.071 }, 00:28:52.071 { 00:28:52.071 "name": "cache_device", 00:28:52.071 "type": "bdev", 00:28:52.071 "chunks": [ 00:28:52.071 { 00:28:52.071 "id": 0, 00:28:52.071 "state": "INACTIVE", 00:28:52.071 "utilization": 0.0 00:28:52.071 }, 00:28:52.072 { 00:28:52.072 "id": 1, 00:28:52.072 "state": "CLOSED", 00:28:52.072 "utilization": 1.0 00:28:52.072 }, 00:28:52.072 { 00:28:52.072 "id": 2, 00:28:52.072 "state": "CLOSED", 00:28:52.072 "utilization": 1.0 00:28:52.072 }, 00:28:52.072 { 00:28:52.072 "id": 3, 00:28:52.072 "state": "OPEN", 00:28:52.072 "utilization": 0.001953125 00:28:52.072 }, 00:28:52.072 { 00:28:52.072 "id": 4, 00:28:52.072 "state": "OPEN", 00:28:52.072 "utilization": 0.0 00:28:52.072 } 00:28:52.072 ], 00:28:52.072 "read-only": true 00:28:52.072 }, 00:28:52.072 { 00:28:52.072 "name": "verbose_mode", 
00:28:52.072 "value": true, 00:28:52.072 "unit": "", 00:28:52.072 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:52.072 }, 00:28:52.072 { 00:28:52.072 "name": "prep_upgrade_on_shutdown", 00:28:52.072 "value": true, 00:28:52.072 "unit": "", 00:28:52.072 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:52.072 } 00:28:52.072 ] 00:28:52.072 } 00:28:52.072 16:23:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:52.072 16:23:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80764 ]] 00:28:52.072 16:23:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80764 00:28:52.072 16:23:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80764 ']' 00:28:52.072 16:23:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80764 00:28:52.072 16:23:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:28:52.072 16:23:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:52.072 16:23:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80764 00:28:52.072 killing process with pid 80764 00:28:52.072 16:23:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:52.072 16:23:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:52.072 16:23:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80764' 00:28:52.072 16:23:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80764 00:28:52.072 16:23:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80764 00:28:53.450 [2024-11-04 16:23:11.823630] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:53.450 [2024-11-04 16:23:11.843171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:53.450 [2024-11-04 16:23:11.843211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:53.450 [2024-11-04 16:23:11.843227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:53.450 [2024-11-04 16:23:11.843253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:53.450 [2024-11-04 16:23:11.843276] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:53.450 [2024-11-04 16:23:11.847439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:53.450 [2024-11-04 16:23:11.847469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:53.450 [2024-11-04 16:23:11.847481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.154 ms 00:28:53.450 [2024-11-04 16:23:11.847507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.569 [2024-11-04 16:23:19.220053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.569 [2024-11-04 16:23:19.220117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:01.569 [2024-11-04 16:23:19.220135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7384.480 ms 00:29:01.569 [2024-11-04 16:23:19.220153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.569 [2024-11-04 16:23:19.221275] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:01.569 [2024-11-04 16:23:19.221304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:01.569 [2024-11-04 16:23:19.221317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.105 ms 00:29:01.569 [2024-11-04 16:23:19.221328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.569 [2024-11-04 16:23:19.222234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.569 [2024-11-04 16:23:19.222253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:01.569 [2024-11-04 16:23:19.222266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.878 ms 00:29:01.569 [2024-11-04 16:23:19.222277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.569 [2024-11-04 16:23:19.237356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.569 [2024-11-04 16:23:19.237395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:01.569 [2024-11-04 16:23:19.237408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.059 ms 00:29:01.569 [2024-11-04 16:23:19.237419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.569 [2024-11-04 16:23:19.246367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.569 [2024-11-04 16:23:19.246406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:01.569 [2024-11-04 16:23:19.246421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.925 ms 00:29:01.569 [2024-11-04 16:23:19.246431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.569 [2024-11-04 16:23:19.246523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.569 [2024-11-04 16:23:19.246537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:01.569 [2024-11-04 16:23:19.246549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:29:01.569 [2024-11-04 16:23:19.246566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.569 [2024-11-04 16:23:19.260341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.569 [2024-11-04 16:23:19.260375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:01.569 [2024-11-04 16:23:19.260388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.780 ms 00:29:01.569 [2024-11-04 16:23:19.260398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.569 [2024-11-04 16:23:19.274139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.569 [2024-11-04 16:23:19.274172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:01.569 [2024-11-04 16:23:19.274184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.730 ms 00:29:01.569 [2024-11-04 16:23:19.274194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.569 [2024-11-04 16:23:19.287780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.569 [2024-11-04 16:23:19.287960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:01.569 [2024-11-04 16:23:19.287997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.574 ms 00:29:01.569 [2024-11-04 16:23:19.288009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.569 [2024-11-04 16:23:19.301891] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.569 [2024-11-04 16:23:19.301923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:01.569 [2024-11-04 16:23:19.301935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.792 ms 00:29:01.569 [2024-11-04 16:23:19.301944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.569 [2024-11-04 16:23:19.301977] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:01.569 [2024-11-04 16:23:19.301993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:01.569 [2024-11-04 16:23:19.302006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:01.569 [2024-11-04 16:23:19.302030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:01.569 [2024-11-04 16:23:19.302041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:01.569 [2024-11-04 16:23:19.302053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:01.569 [2024-11-04 16:23:19.302063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:01.569 [2024-11-04 16:23:19.302074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:01.569 [2024-11-04 16:23:19.302085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:01.569 [2024-11-04 16:23:19.302095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:01.569 [2024-11-04 16:23:19.302106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:01.569 [2024-11-04 16:23:19.302116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:01.569 [2024-11-04 16:23:19.302127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:01.569 [2024-11-04 16:23:19.302137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:01.569 [2024-11-04 16:23:19.302147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:01.569 [2024-11-04 16:23:19.302157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:01.569 [2024-11-04 16:23:19.302168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:01.569 [2024-11-04 16:23:19.302178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:01.569 [2024-11-04 16:23:19.302188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:01.569 [2024-11-04 16:23:19.302200] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:01.569 [2024-11-04 16:23:19.302211] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f7e20e0b-2d25-4b61-a652-08f699c4c408 00:29:01.569 [2024-11-04 16:23:19.302221] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:01.569 [2024-11-04 16:23:19.302232] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:01.569 [2024-11-04 16:23:19.302242] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:01.569 [2024-11-04 16:23:19.302254] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:01.569 [2024-11-04 16:23:19.302263] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:01.570 [2024-11-04 16:23:19.302273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:01.570 [2024-11-04 16:23:19.302288] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:01.570 [2024-11-04 16:23:19.302297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:01.570 [2024-11-04 16:23:19.302306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:01.570 [2024-11-04 16:23:19.302317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.570 [2024-11-04 16:23:19.302333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:01.570 [2024-11-04 16:23:19.302343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.341 ms 00:29:01.570 [2024-11-04 16:23:19.302353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.570 [2024-11-04 16:23:19.322169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.570 [2024-11-04 16:23:19.322337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:01.570 [2024-11-04 16:23:19.322358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.831 ms 00:29:01.570 [2024-11-04 16:23:19.322377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.570 [2024-11-04 16:23:19.323031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.570 [2024-11-04 16:23:19.323046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:01.570 [2024-11-04 16:23:19.323058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.618 ms 00:29:01.570 [2024-11-04 16:23:19.323068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.570 [2024-11-04 16:23:19.388182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.570 [2024-11-04 16:23:19.388218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:01.570 [2024-11-04 16:23:19.388231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.570 [2024-11-04 16:23:19.388247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.570 [2024-11-04 16:23:19.388286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.570 [2024-11-04 16:23:19.388297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:01.570 [2024-11-04 16:23:19.388308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.570 [2024-11-04 16:23:19.388318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.570 [2024-11-04 16:23:19.388407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.570 [2024-11-04 16:23:19.388421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:01.570 [2024-11-04 16:23:19.388432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.570 [2024-11-04 16:23:19.388450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.570 [2024-11-04 16:23:19.388474] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.570 [2024-11-04 16:23:19.388485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:01.570 [2024-11-04 16:23:19.388495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.570 [2024-11-04 16:23:19.388505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.570 [2024-11-04 16:23:19.513072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.570 [2024-11-04 16:23:19.513123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:01.570 [2024-11-04 16:23:19.513140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.570 [2024-11-04 16:23:19.513158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.570 [2024-11-04 16:23:19.612115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.570 [2024-11-04 16:23:19.612169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:01.570 [2024-11-04 16:23:19.612184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.570 [2024-11-04 16:23:19.612196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.570 [2024-11-04 16:23:19.612330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.570 [2024-11-04 16:23:19.612345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:01.570 [2024-11-04 16:23:19.612357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.570 [2024-11-04 16:23:19.612368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.570 [2024-11-04 16:23:19.612416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.570 [2024-11-04 16:23:19.612433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:01.570 [2024-11-04 16:23:19.612444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.570 [2024-11-04 16:23:19.612455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.570 [2024-11-04 16:23:19.612577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.570 [2024-11-04 16:23:19.612592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:01.570 [2024-11-04 16:23:19.612604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.570 [2024-11-04 16:23:19.612614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.570 [2024-11-04 16:23:19.612654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.570 [2024-11-04 16:23:19.612672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:01.570 [2024-11-04 16:23:19.612683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.570 [2024-11-04 16:23:19.612694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.570 [2024-11-04 16:23:19.612744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.570 [2024-11-04 16:23:19.612780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:01.570 [2024-11-04 16:23:19.612791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.570 [2024-11-04 16:23:19.612809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.570 
[2024-11-04 16:23:19.612866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.570 [2024-11-04 16:23:19.612883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:01.570 [2024-11-04 16:23:19.612894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.570 [2024-11-04 16:23:19.612905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.570 [2024-11-04 16:23:19.613053] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7782.456 ms, result 0 00:29:04.107 16:23:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:04.107 16:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:04.107 16:23:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:04.107 16:23:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:04.107 16:23:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:04.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.107 16:23:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81318 00:29:04.107 16:23:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:04.107 16:23:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:04.107 16:23:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81318 00:29:04.107 16:23:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81318 ']' 00:29:04.107 16:23:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.107 16:23:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:04.107 16:23:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.107 16:23:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:04.107 16:23:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:04.365 [2024-11-04 16:23:22.862661] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:29:04.366 [2024-11-04 16:23:22.862945] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81318 ] 00:29:04.366 [2024-11-04 16:23:23.044865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.625 [2024-11-04 16:23:23.149657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.563 [2024-11-04 16:23:23.962911] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:05.563 [2024-11-04 16:23:23.962982] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:05.563 [2024-11-04 16:23:24.108902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.563 [2024-11-04 16:23:24.108944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:05.563 [2024-11-04 16:23:24.108959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:05.563 [2024-11-04 16:23:24.108969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.563 [2024-11-04 16:23:24.109019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.563 [2024-11-04 16:23:24.109030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:05.563 [2024-11-04 16:23:24.109040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:05.563 [2024-11-04 16:23:24.109049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.563 [2024-11-04 16:23:24.109077] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:05.563 [2024-11-04 16:23:24.109948] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:05.563 [2024-11-04 16:23:24.109976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.563 [2024-11-04 16:23:24.109987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:05.563 [2024-11-04 16:23:24.109998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.911 ms 00:29:05.563 [2024-11-04 16:23:24.110009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.563 [2024-11-04 16:23:24.111506] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:05.563 [2024-11-04 16:23:24.129932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.563 [2024-11-04 16:23:24.129972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:05.563 [2024-11-04 16:23:24.129993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.456 ms 00:29:05.563 [2024-11-04 16:23:24.130003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.563 [2024-11-04 16:23:24.130081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.563 [2024-11-04 16:23:24.130095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:05.563 [2024-11-04 16:23:24.130106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:05.563 [2024-11-04 16:23:24.130116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.563 [2024-11-04 16:23:24.136978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.563 [2024-11-04 
16:23:24.137156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:05.563 [2024-11-04 16:23:24.137177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.793 ms 00:29:05.563 [2024-11-04 16:23:24.137187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.563 [2024-11-04 16:23:24.137262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.563 [2024-11-04 16:23:24.137275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:05.563 [2024-11-04 16:23:24.137287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:29:05.563 [2024-11-04 16:23:24.137297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.563 [2024-11-04 16:23:24.137342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.563 [2024-11-04 16:23:24.137354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:05.563 [2024-11-04 16:23:24.137369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:05.563 [2024-11-04 16:23:24.137379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.563 [2024-11-04 16:23:24.137406] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:05.563 [2024-11-04 16:23:24.142150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.563 [2024-11-04 16:23:24.142199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:05.563 [2024-11-04 16:23:24.142211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.757 ms 00:29:05.563 [2024-11-04 16:23:24.142226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.563 [2024-11-04 16:23:24.142252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.563 [2024-11-04 16:23:24.142262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:05.563 [2024-11-04 16:23:24.142272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:05.563 [2024-11-04 16:23:24.142282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.563 [2024-11-04 16:23:24.142337] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:05.563 [2024-11-04 16:23:24.142359] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:05.563 [2024-11-04 16:23:24.142396] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:05.563 [2024-11-04 16:23:24.142412] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:05.563 [2024-11-04 16:23:24.142495] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:05.564 [2024-11-04 16:23:24.142509] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:05.564 [2024-11-04 16:23:24.142521] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:05.564 [2024-11-04 16:23:24.142533] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:05.564 [2024-11-04 16:23:24.142545] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:05.564 [2024-11-04 16:23:24.142559] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:05.564 [2024-11-04 16:23:24.142568] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:05.564 [2024-11-04 16:23:24.142577] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:05.564 [2024-11-04 16:23:24.142586] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:05.564 [2024-11-04 16:23:24.142596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.564 [2024-11-04 16:23:24.142605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:05.564 [2024-11-04 16:23:24.142626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.262 ms 00:29:05.564 [2024-11-04 16:23:24.142635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.564 [2024-11-04 16:23:24.142703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.564 [2024-11-04 16:23:24.142714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:05.564 [2024-11-04 16:23:24.142724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:29:05.564 [2024-11-04 16:23:24.142736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.564 [2024-11-04 16:23:24.142837] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:05.564 [2024-11-04 16:23:24.142852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:05.564 [2024-11-04 16:23:24.142862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:05.564 [2024-11-04 16:23:24.142890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:05.564 [2024-11-04 16:23:24.142901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:05.564 [2024-11-04 16:23:24.142910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:05.564 [2024-11-04 16:23:24.142920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:05.564 [2024-11-04 16:23:24.142930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:05.564 [2024-11-04 16:23:24.142940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:05.564 [2024-11-04 16:23:24.142950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:05.564 [2024-11-04 16:23:24.142959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:05.564 [2024-11-04 16:23:24.142969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:05.564 [2024-11-04 16:23:24.142978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:05.564 [2024-11-04 16:23:24.142987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:05.564 [2024-11-04 16:23:24.142996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:05.564 [2024-11-04 16:23:24.143007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:05.564 [2024-11-04 16:23:24.143016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:05.564 [2024-11-04 16:23:24.143025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:05.564 [2024-11-04 16:23:24.143034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:05.564 [2024-11-04 16:23:24.143043] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:05.564 [2024-11-04 16:23:24.143052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:05.564 [2024-11-04 16:23:24.143061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:05.564 [2024-11-04 16:23:24.143070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:05.564 [2024-11-04 16:23:24.143079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:05.564 [2024-11-04 16:23:24.143088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:05.564 [2024-11-04 16:23:24.143107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:05.564 [2024-11-04 16:23:24.143116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:05.564 [2024-11-04 16:23:24.143125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:05.564 [2024-11-04 16:23:24.143133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:05.564 [2024-11-04 16:23:24.143143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:05.564 [2024-11-04 16:23:24.143151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:05.564 [2024-11-04 16:23:24.143160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:05.564 [2024-11-04 16:23:24.143169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:05.564 [2024-11-04 16:23:24.143178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:05.564 [2024-11-04 16:23:24.143186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:05.564 [2024-11-04 16:23:24.143195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:05.564 [2024-11-04 16:23:24.143204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:05.564 [2024-11-04 16:23:24.143213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:05.564 [2024-11-04 16:23:24.143222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:05.564 [2024-11-04 16:23:24.143230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:05.564 [2024-11-04 16:23:24.143239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:05.564 [2024-11-04 16:23:24.143249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:05.564 [2024-11-04 16:23:24.143258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:05.564 [2024-11-04 16:23:24.143284] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:05.564 [2024-11-04 16:23:24.143294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:05.564 [2024-11-04 16:23:24.143304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:05.564 [2024-11-04 16:23:24.143313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:05.564 [2024-11-04 16:23:24.143327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:05.564 [2024-11-04 16:23:24.143336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:05.564 [2024-11-04 16:23:24.143346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:05.564 [2024-11-04 16:23:24.143355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:05.564 [2024-11-04 16:23:24.143364] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:05.564 [2024-11-04 16:23:24.143373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:05.564 [2024-11-04 16:23:24.143384] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:05.564 [2024-11-04 16:23:24.143396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:05.564 [2024-11-04 16:23:24.143407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:05.564 [2024-11-04 16:23:24.143417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:05.564 [2024-11-04 16:23:24.143427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:05.564 [2024-11-04 16:23:24.143437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:05.564 [2024-11-04 16:23:24.143448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:05.564 [2024-11-04 16:23:24.143458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:05.564 [2024-11-04 16:23:24.143468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:05.564 [2024-11-04 16:23:24.143478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:05.564 [2024-11-04 16:23:24.143489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:05.564 [2024-11-04 16:23:24.143499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:05.564 [2024-11-04 16:23:24.143509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:05.564 [2024-11-04 16:23:24.143518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:05.564 [2024-11-04 16:23:24.143528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:05.564 [2024-11-04 16:23:24.143538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:05.564 [2024-11-04 16:23:24.143548] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:05.564 [2024-11-04 16:23:24.143560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:05.564 [2024-11-04 16:23:24.143571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:05.564 [2024-11-04 16:23:24.143580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:05.564 [2024-11-04 16:23:24.143591] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:05.564 [2024-11-04 16:23:24.143601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:05.564 [2024-11-04 16:23:24.143612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:05.564 [2024-11-04 16:23:24.143622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:05.564 [2024-11-04 16:23:24.143632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.826 ms 00:29:05.564 [2024-11-04 16:23:24.143642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:05.564 [2024-11-04 16:23:24.143685] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:05.564 [2024-11-04 16:23:24.143699] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:09.758 [2024-11-04 16:23:27.647306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.758 [2024-11-04 16:23:27.647584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:09.758 [2024-11-04 16:23:27.647610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3509.308 ms 00:29:09.758 [2024-11-04 16:23:27.647622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.758 [2024-11-04 16:23:27.684506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.758 [2024-11-04 16:23:27.684550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:09.758 [2024-11-04 16:23:27.684565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.600 ms 00:29:09.758 [2024-11-04 16:23:27.684576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.758 [2024-11-04 16:23:27.684656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.758 [2024-11-04 16:23:27.684673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:09.758 [2024-11-04 16:23:27.684684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:09.758 [2024-11-04 16:23:27.684694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.758 [2024-11-04 16:23:27.730130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.758 [2024-11-04 16:23:27.730169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:09.758 [2024-11-04 16:23:27.730183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.468 ms 00:29:09.758 [2024-11-04 16:23:27.730198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.758 [2024-11-04 16:23:27.730231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.758 [2024-11-04 16:23:27.730241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:09.758 [2024-11-04 16:23:27.730253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:09.758 [2024-11-04 16:23:27.730263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.758 [2024-11-04 16:23:27.730778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.758 [2024-11-04 16:23:27.730794] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:09.758 [2024-11-04 16:23:27.730806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.441 ms 00:29:09.758 [2024-11-04 16:23:27.730816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.758 [2024-11-04 16:23:27.730866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.758 [2024-11-04 16:23:27.730900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:09.758 [2024-11-04 16:23:27.730912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:09.758 [2024-11-04 16:23:27.730923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.758 [2024-11-04 16:23:27.751853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.758 [2024-11-04 16:23:27.751891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:09.758 [2024-11-04 16:23:27.751904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.941 ms 00:29:09.758 [2024-11-04 16:23:27.751915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.758 [2024-11-04 16:23:27.771117] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:09.758 [2024-11-04 16:23:27.771160] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:09.758 [2024-11-04 16:23:27.771177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.758 [2024-11-04 16:23:27.771188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:09.758 [2024-11-04 16:23:27.771199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.178 ms 00:29:09.758 [2024-11-04 16:23:27.771209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.758 [2024-11-04 16:23:27.790646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.758 [2024-11-04 16:23:27.790824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:09.758 [2024-11-04 16:23:27.790846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.423 ms 00:29:09.758 [2024-11-04 16:23:27.790857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.758 [2024-11-04 16:23:27.808196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.758 [2024-11-04 16:23:27.808232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:09.759 [2024-11-04 16:23:27.808245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.322 ms 00:29:09.759 [2024-11-04 16:23:27.808256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.759 [2024-11-04 16:23:27.825518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.759 [2024-11-04 16:23:27.825550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:09.759 [2024-11-04 16:23:27.825562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.249 ms 00:29:09.759 [2024-11-04 16:23:27.825572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.759 [2024-11-04 16:23:27.826370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.759 [2024-11-04 16:23:27.826406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:09.759 [2024-11-04 
16:23:27.826419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.702 ms 00:29:09.759 [2024-11-04 16:23:27.826429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.759 [2024-11-04 16:23:27.916952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.759 [2024-11-04 16:23:27.917012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:09.759 [2024-11-04 16:23:27.917028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 90.645 ms 00:29:09.759 [2024-11-04 16:23:27.917038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.759 [2024-11-04 16:23:27.927103] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:09.759 [2024-11-04 16:23:27.928001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.759 [2024-11-04 16:23:27.928030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:09.759 [2024-11-04 16:23:27.928043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.933 ms 00:29:09.759 [2024-11-04 16:23:27.928054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.759 [2024-11-04 16:23:27.928138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.759 [2024-11-04 16:23:27.928156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:09.759 [2024-11-04 16:23:27.928167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:09.759 [2024-11-04 16:23:27.928178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.759 [2024-11-04 16:23:27.928240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.759 [2024-11-04 16:23:27.928253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:09.759 [2024-11-04 16:23:27.928264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:09.759 [2024-11-04 16:23:27.928274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.759 [2024-11-04 16:23:27.928297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.759 [2024-11-04 16:23:27.928309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:09.759 [2024-11-04 16:23:27.928320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:09.759 [2024-11-04 16:23:27.928333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.759 [2024-11-04 16:23:27.928369] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:09.759 [2024-11-04 16:23:27.928381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.759 [2024-11-04 16:23:27.928391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:09.759 [2024-11-04 16:23:27.928403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:09.759 [2024-11-04 16:23:27.928414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.759 [2024-11-04 16:23:27.962423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.759 [2024-11-04 16:23:27.962464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:09.759 [2024-11-04 16:23:27.962477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.033 ms 00:29:09.759 [2024-11-04 16:23:27.962488] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.759 [2024-11-04 16:23:27.962560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.759 [2024-11-04 16:23:27.962571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:09.759 [2024-11-04 16:23:27.962582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:29:09.759 [2024-11-04 16:23:27.962592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.759 [2024-11-04 16:23:27.963705] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3860.628 ms, result 0 00:29:09.759 [2024-11-04 16:23:27.978759] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:09.759 [2024-11-04 16:23:27.994725] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:09.759 [2024-11-04 16:23:28.003348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:09.759 16:23:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:09.759 16:23:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:29:09.759 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:09.759 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:09.759 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:09.759 [2024-11-04 16:23:28.238957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.759 [2024-11-04 16:23:28.238995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:09.759 [2024-11-04 16:23:28.239008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:09.759 [2024-11-04 16:23:28.239022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.759 [2024-11-04 16:23:28.239044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.759 [2024-11-04 16:23:28.239054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:09.759 [2024-11-04 16:23:28.239064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:09.759 [2024-11-04 16:23:28.239074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.759 [2024-11-04 16:23:28.239093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:09.759 [2024-11-04 16:23:28.239102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:09.759 [2024-11-04 16:23:28.239111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:09.759 [2024-11-04 16:23:28.239121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:09.759 [2024-11-04 16:23:28.239170] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.201 ms, result 0 00:29:09.759 true 00:29:09.759 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:09.759 { 00:29:09.759 "name": "ftl", 00:29:09.759 "properties": [ 00:29:09.759 { 00:29:09.759 "name": "superblock_version", 00:29:09.759 "value": 5, 00:29:09.759 "read-only": true 00:29:09.759 }, 
00:29:09.759 { 00:29:09.759 "name": "base_device", 00:29:09.759 "bands": [ 00:29:09.759 { 00:29:09.759 "id": 0, 00:29:09.759 "state": "CLOSED", 00:29:09.759 "validity": 1.0 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 1, 00:29:09.759 "state": "CLOSED", 00:29:09.759 "validity": 1.0 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 2, 00:29:09.759 "state": "CLOSED", 00:29:09.759 "validity": 0.007843137254901933 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 3, 00:29:09.759 "state": "FREE", 00:29:09.759 "validity": 0.0 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 4, 00:29:09.759 "state": "FREE", 00:29:09.759 "validity": 0.0 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 5, 00:29:09.759 "state": "FREE", 00:29:09.759 "validity": 0.0 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 6, 00:29:09.759 "state": "FREE", 00:29:09.759 "validity": 0.0 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 7, 00:29:09.759 "state": "FREE", 00:29:09.759 "validity": 0.0 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 8, 00:29:09.759 "state": "FREE", 00:29:09.759 "validity": 0.0 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 9, 00:29:09.759 "state": "FREE", 00:29:09.759 "validity": 0.0 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 10, 00:29:09.759 "state": "FREE", 00:29:09.759 "validity": 0.0 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 11, 00:29:09.759 "state": "FREE", 00:29:09.759 "validity": 0.0 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 12, 00:29:09.759 "state": "FREE", 00:29:09.759 "validity": 0.0 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 13, 00:29:09.759 "state": "FREE", 00:29:09.759 "validity": 0.0 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 14, 00:29:09.759 "state": "FREE", 00:29:09.759 "validity": 0.0 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 15, 00:29:09.759 "state": "FREE", 00:29:09.759 "validity": 0.0 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 16, 00:29:09.759 "state": "FREE", 00:29:09.759 "validity": 0.0 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 17, 00:29:09.759 "state": "FREE", 00:29:09.759 "validity": 0.0 00:29:09.759 } 00:29:09.759 ], 00:29:09.759 "read-only": true 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "name": "cache_device", 00:29:09.759 "type": "bdev", 00:29:09.759 "chunks": [ 00:29:09.759 { 00:29:09.759 "id": 0, 00:29:09.759 "state": "INACTIVE", 00:29:09.759 "utilization": 0.0 00:29:09.759 }, 00:29:09.759 { 00:29:09.759 "id": 1, 00:29:09.759 "state": "OPEN", 00:29:09.759 "utilization": 0.0 00:29:09.760 }, 00:29:09.760 { 00:29:09.760 "id": 2, 00:29:09.760 "state": "OPEN", 00:29:09.760 "utilization": 0.0 00:29:09.760 }, 00:29:09.760 { 00:29:09.760 "id": 3, 00:29:09.760 "state": "FREE", 00:29:09.760 "utilization": 0.0 00:29:09.760 }, 00:29:09.760 { 00:29:09.760 "id": 4, 00:29:09.760 "state": "FREE", 00:29:09.760 "utilization": 0.0 00:29:09.760 } 00:29:09.760 ], 00:29:09.760 "read-only": true 00:29:09.760 }, 00:29:09.760 { 00:29:09.760 "name": "verbose_mode", 00:29:09.760 "value": true, 00:29:09.760 "unit": "", 00:29:09.760 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:09.760 }, 00:29:09.760 { 00:29:09.760 "name": "prep_upgrade_on_shutdown", 00:29:09.760 "value": false, 00:29:09.760 "unit": "", 00:29:09.760 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:09.760 } 00:29:09.760 ] 00:29:09.760 } 00:29:09.760 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:09.760 16:23:28 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:09.760 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:10.031 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:10.031 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:10.031 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:10.031 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:10.031 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:10.308 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:10.308 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:10.308 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:10.308 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:10.308 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:10.308 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:10.308 Validate MD5 checksum, iteration 1 00:29:10.308 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:10.308 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:10.308 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:10.308 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:10.308 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:10.308 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:10.308 16:23:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:10.308 [2024-11-04 16:23:28.951841] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:29:10.308 [2024-11-04 16:23:28.951955] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81398 ] 00:29:10.567 [2024-11-04 16:23:29.131275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.567 [2024-11-04 16:23:29.257415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.473  [2024-11-04T16:23:31.762Z] Copying: 653/1024 [MB] (653 MBps) [2024-11-04T16:23:33.140Z] Copying: 1024/1024 [MB] (average 646 MBps) 00:29:14.418 00:29:14.676 16:23:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:14.676 16:23:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:16.580 16:23:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:16.580 16:23:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a7da18d8e6443d13e5319ac5ae8f28e1 00:29:16.580 16:23:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a7da18d8e6443d13e5319ac5ae8f28e1 != \a\7\d\a\1\8\d\8\e\6\4\4\3\d\1\3\e\5\3\1\9\a\c\5\a\e\8\f\2\8\e\1 ]] 00:29:16.580 16:23:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:16.580 16:23:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:16.580 Validate MD5 checksum, iteration 2 00:29:16.580 16:23:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:16.580 16:23:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:16.580 16:23:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:16.580 16:23:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:16.580 16:23:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:16.580 16:23:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:16.580 16:23:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:16.580 [2024-11-04 16:23:34.927720] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:29:16.580 [2024-11-04 16:23:34.928273] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81461 ] 00:29:16.580 [2024-11-04 16:23:35.111104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.580 [2024-11-04 16:23:35.234881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.584  [2024-11-04T16:23:37.564Z] Copying: 654/1024 [MB] (654 MBps) [2024-11-04T16:23:38.942Z] Copying: 1024/1024 [MB] (average 630 MBps) 00:29:20.220 00:29:20.220 16:23:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:20.220 16:23:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=afa82a5c7c867d8f22081b447c417fa2 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ afa82a5c7c867d8f22081b447c417fa2 != \a\f\a\8\2\a\5\c\7\c\8\6\7\d\8\f\2\2\0\8\1\b\4\4\7\c\4\1\7\f\a\2 ]] 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81318 ]] 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81318 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81528 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81528 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81528 ']' 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
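Note: the trace above (upgrade_shutdown.sh@114-115, ftl/common.sh@137-139 and @81-91) is the heart of the test: the running spdk_tgt (pid 81318) is killed with SIGKILL so FTL never persists a clean-shutdown state, and a new target (pid 81528) is started immediately from the tgt.json captured earlier, which is why the startup logged below has to replay P2L checkpoints and recover open chunks. A rough sketch of those two helpers follows, using the variable names the shell's "Killed" message prints a little further down ($spdk_tgt_bin, $spdk_tgt_cpumask, $spdk_tgt_cnfg); how the pid is actually captured is not visible in the trace, so the backgrounding here is an assumption.

```bash
# Sketch of the dirty restart, not the authoritative ftl/common.sh.
tcp_target_shutdown_dirty() {
	# common.sh@137-139: SIGKILL the target so FTL cannot write its clean state.
	[[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
	unset spdk_tgt_pid
}

tcp_target_setup() {
	# common.sh@84-91: bring a fresh target up from the saved JSON config; judging
	# by the log that follows, tgt.json recreates the base/cache bdevs, the FTL
	# bdev and the NVMe/TCP subsystem, so FTL startup runs in recovery mode.
	"$spdk_tgt_bin" "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
	spdk_tgt_pid=$!
	export spdk_tgt_pid
	waitforlisten "$spdk_tgt_pid"   # block until /var/tmp/spdk.sock answers RPCs
}
```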
00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:22.122 16:23:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:22.122 [2024-11-04 16:23:40.536675] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:29:22.122 [2024-11-04 16:23:40.537022] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81528 ] 00:29:22.122 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 81318 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:22.122 [2024-11-04 16:23:40.727523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.122 [2024-11-04 16:23:40.832671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.059 [2024-11-04 16:23:41.764674] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:23.059 [2024-11-04 16:23:41.764738] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:23.320 [2024-11-04 16:23:41.910410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.320 [2024-11-04 16:23:41.910453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:23.320 [2024-11-04 16:23:41.910468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:23.320 [2024-11-04 16:23:41.910478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.320 [2024-11-04 16:23:41.910526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.320 [2024-11-04 16:23:41.910537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:23.320 [2024-11-04 16:23:41.910548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:23.320 [2024-11-04 16:23:41.910557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.320 [2024-11-04 16:23:41.910585] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:23.320 [2024-11-04 16:23:41.911569] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:23.320 [2024-11-04 16:23:41.911599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.320 [2024-11-04 16:23:41.911612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:23.320 [2024-11-04 16:23:41.911623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.026 ms 00:29:23.320 [2024-11-04 16:23:41.911634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.320 [2024-11-04 16:23:41.911985] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:23.320 [2024-11-04 16:23:41.934905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.320 [2024-11-04 16:23:41.934944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:23.320 [2024-11-04 16:23:41.934957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.958 ms 00:29:23.320 [2024-11-04 16:23:41.934967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.320 [2024-11-04 16:23:41.947900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:29:23.320 [2024-11-04 16:23:41.947936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:23.320 [2024-11-04 16:23:41.947951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:23.320 [2024-11-04 16:23:41.947961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.320 [2024-11-04 16:23:41.948396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.320 [2024-11-04 16:23:41.948411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:23.320 [2024-11-04 16:23:41.948421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.361 ms 00:29:23.320 [2024-11-04 16:23:41.948431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.320 [2024-11-04 16:23:41.948484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.320 [2024-11-04 16:23:41.948501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:23.320 [2024-11-04 16:23:41.948511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:29:23.320 [2024-11-04 16:23:41.948521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.320 [2024-11-04 16:23:41.948543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.320 [2024-11-04 16:23:41.948553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:23.320 [2024-11-04 16:23:41.948563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:23.320 [2024-11-04 16:23:41.948572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.320 [2024-11-04 16:23:41.948593] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:23.320 [2024-11-04 16:23:41.952632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.320 [2024-11-04 16:23:41.952662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:23.320 [2024-11-04 16:23:41.952673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.049 ms 00:29:23.320 [2024-11-04 16:23:41.952683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.320 [2024-11-04 16:23:41.952715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.320 [2024-11-04 16:23:41.952726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:23.320 [2024-11-04 16:23:41.952736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:23.320 [2024-11-04 16:23:41.952754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.320 [2024-11-04 16:23:41.952788] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:23.320 [2024-11-04 16:23:41.952809] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:23.320 [2024-11-04 16:23:41.952841] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:23.320 [2024-11-04 16:23:41.952861] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:23.320 [2024-11-04 16:23:41.952942] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:23.320 [2024-11-04 16:23:41.952955] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:23.320 [2024-11-04 16:23:41.952967] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:23.320 [2024-11-04 16:23:41.952981] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:23.320 [2024-11-04 16:23:41.952992] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:23.320 [2024-11-04 16:23:41.953004] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:23.320 [2024-11-04 16:23:41.953013] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:23.320 [2024-11-04 16:23:41.953022] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:23.320 [2024-11-04 16:23:41.953031] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:23.320 [2024-11-04 16:23:41.953041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.320 [2024-11-04 16:23:41.953053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:23.320 [2024-11-04 16:23:41.953063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.255 ms 00:29:23.320 [2024-11-04 16:23:41.953072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.320 [2024-11-04 16:23:41.953138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.320 [2024-11-04 16:23:41.953148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:23.320 [2024-11-04 16:23:41.953159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:29:23.320 [2024-11-04 16:23:41.953168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.320 [2024-11-04 16:23:41.953245] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:23.320 [2024-11-04 16:23:41.953257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:23.320 [2024-11-04 16:23:41.953270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:23.320 [2024-11-04 16:23:41.953281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.320 [2024-11-04 16:23:41.953291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:23.320 [2024-11-04 16:23:41.953301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:23.320 [2024-11-04 16:23:41.953310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:23.320 [2024-11-04 16:23:41.953319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:23.320 [2024-11-04 16:23:41.953329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:23.320 [2024-11-04 16:23:41.953338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.320 [2024-11-04 16:23:41.953347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:23.320 [2024-11-04 16:23:41.953357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:23.320 [2024-11-04 16:23:41.953366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.320 [2024-11-04 16:23:41.953374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:23.320 [2024-11-04 16:23:41.953383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:29:23.320 [2024-11-04 16:23:41.953391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.320 [2024-11-04 16:23:41.953399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:23.320 [2024-11-04 16:23:41.953408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:23.320 [2024-11-04 16:23:41.953416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.320 [2024-11-04 16:23:41.953425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:23.320 [2024-11-04 16:23:41.953433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:23.320 [2024-11-04 16:23:41.953441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:23.320 [2024-11-04 16:23:41.953449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:23.320 [2024-11-04 16:23:41.953468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:23.321 [2024-11-04 16:23:41.953476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:23.321 [2024-11-04 16:23:41.953485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:23.321 [2024-11-04 16:23:41.953493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:23.321 [2024-11-04 16:23:41.953502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:23.321 [2024-11-04 16:23:41.953510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:23.321 [2024-11-04 16:23:41.953519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:23.321 [2024-11-04 16:23:41.953527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:23.321 [2024-11-04 16:23:41.953536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:23.321 [2024-11-04 16:23:41.953545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:23.321 [2024-11-04 16:23:41.953553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.321 [2024-11-04 16:23:41.953561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:23.321 [2024-11-04 16:23:41.953570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:23.321 [2024-11-04 16:23:41.953578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.321 [2024-11-04 16:23:41.953586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:23.321 [2024-11-04 16:23:41.953594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:23.321 [2024-11-04 16:23:41.953602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.321 [2024-11-04 16:23:41.953611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:23.321 [2024-11-04 16:23:41.953621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:23.321 [2024-11-04 16:23:41.953630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.321 [2024-11-04 16:23:41.953638] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:23.321 [2024-11-04 16:23:41.953648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:23.321 [2024-11-04 16:23:41.953657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:23.321 [2024-11-04 16:23:41.953665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:29:23.321 [2024-11-04 16:23:41.953675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:23.321 [2024-11-04 16:23:41.953684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:23.321 [2024-11-04 16:23:41.953692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:23.321 [2024-11-04 16:23:41.953701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:23.321 [2024-11-04 16:23:41.953709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:23.321 [2024-11-04 16:23:41.953718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:23.321 [2024-11-04 16:23:41.953729] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:23.321 [2024-11-04 16:23:41.953740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:23.321 [2024-11-04 16:23:41.954017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:23.321 [2024-11-04 16:23:41.954069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:23.321 [2024-11-04 16:23:41.954115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:23.321 [2024-11-04 16:23:41.954160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:23.321 [2024-11-04 16:23:41.954206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:23.321 [2024-11-04 16:23:41.954252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:23.321 [2024-11-04 16:23:41.954429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:23.321 [2024-11-04 16:23:41.954597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:23.321 [2024-11-04 16:23:41.954656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:23.321 [2024-11-04 16:23:41.954703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:23.321 [2024-11-04 16:23:41.954758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:23.321 [2024-11-04 16:23:41.954809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:23.321 [2024-11-04 16:23:41.954895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:23.321 [2024-11-04 16:23:41.954943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:23.321 [2024-11-04 16:23:41.954988] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:29:23.321 [2024-11-04 16:23:41.955036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:23.321 [2024-11-04 16:23:41.955082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:23.321 [2024-11-04 16:23:41.955161] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:23.321 [2024-11-04 16:23:41.955216] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:23.321 [2024-11-04 16:23:41.955246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:23.321 [2024-11-04 16:23:41.955259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.321 [2024-11-04 16:23:41.955275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:23.321 [2024-11-04 16:23:41.955286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.067 ms 00:29:23.321 [2024-11-04 16:23:41.955297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.321 [2024-11-04 16:23:41.990402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.321 [2024-11-04 16:23:41.990573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:23.321 [2024-11-04 16:23:41.990594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.101 ms 00:29:23.321 [2024-11-04 16:23:41.990605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.321 [2024-11-04 16:23:41.990652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.321 [2024-11-04 16:23:41.990664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:23.321 [2024-11-04 16:23:41.990674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:23.321 [2024-11-04 16:23:41.990685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.321 [2024-11-04 16:23:42.035351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.321 [2024-11-04 16:23:42.035385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:23.321 [2024-11-04 16:23:42.035398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.682 ms 00:29:23.321 [2024-11-04 16:23:42.035408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.321 [2024-11-04 16:23:42.035443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.321 [2024-11-04 16:23:42.035453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:23.321 [2024-11-04 16:23:42.035463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:23.321 [2024-11-04 16:23:42.035473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.321 [2024-11-04 16:23:42.035591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.321 [2024-11-04 16:23:42.035604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:23.321 [2024-11-04 16:23:42.035615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:29:23.321 [2024-11-04 16:23:42.035625] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:23.321 [2024-11-04 16:23:42.035662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.321 [2024-11-04 16:23:42.035673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:23.321 [2024-11-04 16:23:42.035683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:23.321 [2024-11-04 16:23:42.035692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.581 [2024-11-04 16:23:42.055796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.581 [2024-11-04 16:23:42.055826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:23.581 [2024-11-04 16:23:42.055839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.112 ms 00:29:23.581 [2024-11-04 16:23:42.055850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.581 [2024-11-04 16:23:42.055955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.581 [2024-11-04 16:23:42.055970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:23.581 [2024-11-04 16:23:42.055982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:23.581 [2024-11-04 16:23:42.056000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.581 [2024-11-04 16:23:42.089764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.581 [2024-11-04 16:23:42.089796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:23.581 [2024-11-04 16:23:42.089810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.800 ms 00:29:23.581 [2024-11-04 16:23:42.089820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.581 [2024-11-04 16:23:42.103505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.581 [2024-11-04 16:23:42.103541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:23.581 [2024-11-04 16:23:42.103562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.612 ms 00:29:23.581 [2024-11-04 16:23:42.103572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.581 [2024-11-04 16:23:42.184539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.581 [2024-11-04 16:23:42.184588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:23.581 [2024-11-04 16:23:42.184611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 81.040 ms 00:29:23.581 [2024-11-04 16:23:42.184622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.581 [2024-11-04 16:23:42.184796] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:23.581 [2024-11-04 16:23:42.184922] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:23.581 [2024-11-04 16:23:42.185033] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:23.581 [2024-11-04 16:23:42.185140] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:23.581 [2024-11-04 16:23:42.185154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.581 [2024-11-04 16:23:42.185165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:23.581 [2024-11-04 
16:23:42.185177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.486 ms 00:29:23.581 [2024-11-04 16:23:42.185187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.581 [2024-11-04 16:23:42.185274] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:23.581 [2024-11-04 16:23:42.185290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.581 [2024-11-04 16:23:42.185304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:23.581 [2024-11-04 16:23:42.185315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:23.581 [2024-11-04 16:23:42.185326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.581 [2024-11-04 16:23:42.207559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.581 [2024-11-04 16:23:42.207601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:23.581 [2024-11-04 16:23:42.207615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.215 ms 00:29:23.581 [2024-11-04 16:23:42.207625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.581 [2024-11-04 16:23:42.221009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.581 [2024-11-04 16:23:42.221043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:23.581 [2024-11-04 16:23:42.221055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:23.581 [2024-11-04 16:23:42.221065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.581 [2024-11-04 16:23:42.221148] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:23.581 [2024-11-04 16:23:42.221336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.581 [2024-11-04 16:23:42.221351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:23.581 [2024-11-04 16:23:42.221362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:29:23.581 [2024-11-04 16:23:42.221372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.150 [2024-11-04 16:23:42.795552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.150 [2024-11-04 16:23:42.795619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:24.150 [2024-11-04 16:23:42.795638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 573.992 ms 00:29:24.150 [2024-11-04 16:23:42.795649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.150 [2024-11-04 16:23:42.801333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.150 [2024-11-04 16:23:42.801485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:24.150 [2024-11-04 16:23:42.801509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.266 ms 00:29:24.150 [2024-11-04 16:23:42.801521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.150 [2024-11-04 16:23:42.802005] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:29:24.150 [2024-11-04 16:23:42.802030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.150 [2024-11-04 16:23:42.802042] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:24.150 [2024-11-04 16:23:42.802053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.463 ms 00:29:24.150 [2024-11-04 16:23:42.802064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.150 [2024-11-04 16:23:42.802094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.150 [2024-11-04 16:23:42.802106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:24.150 [2024-11-04 16:23:42.802118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:24.150 [2024-11-04 16:23:42.802128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.150 [2024-11-04 16:23:42.802169] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 581.963 ms, result 0 00:29:24.150 [2024-11-04 16:23:42.802212] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:24.150 [2024-11-04 16:23:42.802285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.150 [2024-11-04 16:23:42.802297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:24.150 [2024-11-04 16:23:42.802307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:29:24.150 [2024-11-04 16:23:42.802316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.718 [2024-11-04 16:23:43.369535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.718 [2024-11-04 16:23:43.369597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:24.718 [2024-11-04 16:23:43.369614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 567.004 ms 00:29:24.718 [2024-11-04 16:23:43.369625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.718 [2024-11-04 16:23:43.375380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.718 [2024-11-04 16:23:43.375421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:24.718 [2024-11-04 16:23:43.375434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.342 ms 00:29:24.718 [2024-11-04 16:23:43.375445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.718 [2024-11-04 16:23:43.375944] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:24.718 [2024-11-04 16:23:43.375967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.718 [2024-11-04 16:23:43.375978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:24.718 [2024-11-04 16:23:43.375990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.494 ms 00:29:24.718 [2024-11-04 16:23:43.376001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.718 [2024-11-04 16:23:43.376033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.718 [2024-11-04 16:23:43.376045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:24.718 [2024-11-04 16:23:43.376057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:24.718 [2024-11-04 16:23:43.376068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.718 [2024-11-04 
16:23:43.376105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 574.824 ms, result 0 00:29:24.718 [2024-11-04 16:23:43.376148] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:24.718 [2024-11-04 16:23:43.376161] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:24.718 [2024-11-04 16:23:43.376174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.719 [2024-11-04 16:23:43.376186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:24.719 [2024-11-04 16:23:43.376198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1156.919 ms 00:29:24.719 [2024-11-04 16:23:43.376208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.719 [2024-11-04 16:23:43.376237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.719 [2024-11-04 16:23:43.376250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:24.719 [2024-11-04 16:23:43.376265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:24.719 [2024-11-04 16:23:43.376276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.719 [2024-11-04 16:23:43.387294] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:24.719 [2024-11-04 16:23:43.387559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.719 [2024-11-04 16:23:43.387606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:24.719 [2024-11-04 16:23:43.387690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.283 ms 00:29:24.719 [2024-11-04 16:23:43.387724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.719 [2024-11-04 16:23:43.388388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.719 [2024-11-04 16:23:43.388516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:24.719 [2024-11-04 16:23:43.388621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.513 ms 00:29:24.719 [2024-11-04 16:23:43.388660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.719 [2024-11-04 16:23:43.390717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.719 [2024-11-04 16:23:43.390842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:24.719 [2024-11-04 16:23:43.390920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.014 ms 00:29:24.719 [2024-11-04 16:23:43.390956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.719 [2024-11-04 16:23:43.391032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.719 [2024-11-04 16:23:43.391069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:24.719 [2024-11-04 16:23:43.391100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:24.719 [2024-11-04 16:23:43.391258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.719 [2024-11-04 16:23:43.391441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.719 [2024-11-04 16:23:43.391475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:24.719 
[2024-11-04 16:23:43.391506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:24.719 [2024-11-04 16:23:43.391535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.719 [2024-11-04 16:23:43.391576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.719 [2024-11-04 16:23:43.391684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:24.719 [2024-11-04 16:23:43.391774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:24.719 [2024-11-04 16:23:43.391822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.719 [2024-11-04 16:23:43.391876] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:24.719 [2024-11-04 16:23:43.391915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.719 [2024-11-04 16:23:43.391946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:24.719 [2024-11-04 16:23:43.391976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:29:24.719 [2024-11-04 16:23:43.392071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.719 [2024-11-04 16:23:43.392157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:24.719 [2024-11-04 16:23:43.392171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:24.719 [2024-11-04 16:23:43.392183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:29:24.719 [2024-11-04 16:23:43.392194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:24.719 [2024-11-04 16:23:43.393248] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1484.673 ms, result 0 00:29:24.719 [2024-11-04 16:23:43.408106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.719 [2024-11-04 16:23:43.424076] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:24.719 [2024-11-04 16:23:43.433702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:24.978 16:23:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:24.978 Validate MD5 checksum, iteration 1 00:29:24.978 16:23:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:29:24.978 16:23:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:24.978 16:23:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:24.978 16:23:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:24.978 16:23:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:24.978 16:23:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:24.978 16:23:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:24.978 16:23:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:24.978 16:23:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:24.978 16:23:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:24.978 16:23:43 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:24.978 16:23:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:24.978 16:23:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:24.978 16:23:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:24.978 [2024-11-04 16:23:43.571297] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 00:29:24.978 [2024-11-04 16:23:43.571728] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81563 ] 00:29:25.237 [2024-11-04 16:23:43.749742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.237 [2024-11-04 16:23:43.866228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.138  [2024-11-04T16:23:46.118Z] Copying: 660/1024 [MB] (660 MBps) [2024-11-04T16:23:48.018Z] Copying: 1024/1024 [MB] (average 654 MBps) 00:29:29.296 00:29:29.296 16:23:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:29.296 16:23:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:30.676 16:23:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:30.676 Validate MD5 checksum, iteration 2 00:29:30.676 16:23:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a7da18d8e6443d13e5319ac5ae8f28e1 00:29:30.676 16:23:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a7da18d8e6443d13e5319ac5ae8f28e1 != \a\7\d\a\1\8\d\8\e\6\4\4\3\d\1\3\e\5\3\1\9\a\c\5\a\e\8\f\2\8\e\1 ]] 00:29:30.676 16:23:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:30.676 16:23:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:30.676 16:23:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:30.676 16:23:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:30.676 16:23:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:30.676 16:23:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:30.676 16:23:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:30.676 16:23:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:30.676 16:23:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:30.934 [2024-11-04 16:23:49.450996] Starting SPDK v25.01-pre git sha1 
61de1ff17 / DPDK 24.03.0 initialization... 00:29:30.934 [2024-11-04 16:23:49.451270] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81625 ] 00:29:30.934 [2024-11-04 16:23:49.628469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.192 [2024-11-04 16:23:49.756229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.092  [2024-11-04T16:23:52.073Z] Copying: 665/1024 [MB] (665 MBps) [2024-11-04T16:23:55.360Z] Copying: 1024/1024 [MB] (average 658 MBps) 00:29:36.638 00:29:36.896 16:23:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:36.896 16:23:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=afa82a5c7c867d8f22081b447c417fa2 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ afa82a5c7c867d8f22081b447c417fa2 != \a\f\a\8\2\a\5\c\7\c\8\6\7\d\8\f\2\2\0\8\1\b\4\4\7\c\4\1\7\f\a\2 ]] 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81528 ]] 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81528 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 81528 ']' 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 81528 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81528 00:29:38.797 killing process with pid 81528 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81528' 00:29:38.797 16:23:57 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@971 -- # kill 81528 00:29:38.798 16:23:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 81528 00:29:39.735 [2024-11-04 16:23:58.258182] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:39.735 [2024-11-04 16:23:58.278151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.735 [2024-11-04 16:23:58.278194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:39.735 [2024-11-04 16:23:58.278208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:39.735 [2024-11-04 16:23:58.278235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.735 [2024-11-04 16:23:58.278256] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:39.735 [2024-11-04 16:23:58.282390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.735 [2024-11-04 16:23:58.282421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:39.735 [2024-11-04 16:23:58.282433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.124 ms 00:29:39.735 [2024-11-04 16:23:58.282464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.735 [2024-11-04 16:23:58.282669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.735 [2024-11-04 16:23:58.282683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:39.735 [2024-11-04 16:23:58.282694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.179 ms 00:29:39.735 [2024-11-04 16:23:58.282704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.735 [2024-11-04 16:23:58.284134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.735 [2024-11-04 16:23:58.284173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:39.735 [2024-11-04 16:23:58.284186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.413 ms 00:29:39.735 [2024-11-04 16:23:58.284196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.735 [2024-11-04 16:23:58.285136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.736 [2024-11-04 16:23:58.285211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:39.736 [2024-11-04 16:23:58.285229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.900 ms 00:29:39.736 [2024-11-04 16:23:58.285239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.736 [2024-11-04 16:23:58.299526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.736 [2024-11-04 16:23:58.299680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:39.736 [2024-11-04 16:23:58.299702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.272 ms 00:29:39.736 [2024-11-04 16:23:58.299735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.736 [2024-11-04 16:23:58.307439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.736 [2024-11-04 16:23:58.307479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:39.736 [2024-11-04 16:23:58.307492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.665 ms 00:29:39.736 [2024-11-04 16:23:58.307503] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:39.736 [2024-11-04 16:23:58.307589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.736 [2024-11-04 16:23:58.307602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:39.736 [2024-11-04 16:23:58.307613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:29:39.736 [2024-11-04 16:23:58.307623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.736 [2024-11-04 16:23:58.321744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.736 [2024-11-04 16:23:58.321783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:39.736 [2024-11-04 16:23:58.321795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.121 ms 00:29:39.736 [2024-11-04 16:23:58.321820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.736 [2024-11-04 16:23:58.336247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.736 [2024-11-04 16:23:58.336377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:39.736 [2024-11-04 16:23:58.336396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.415 ms 00:29:39.736 [2024-11-04 16:23:58.336422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.736 [2024-11-04 16:23:58.350387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.736 [2024-11-04 16:23:58.350421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:39.736 [2024-11-04 16:23:58.350434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.927 ms 00:29:39.736 [2024-11-04 16:23:58.350444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.736 [2024-11-04 16:23:58.364344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.736 [2024-11-04 16:23:58.364469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:39.736 [2024-11-04 16:23:58.364616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.858 ms 00:29:39.736 [2024-11-04 16:23:58.364654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.736 [2024-11-04 16:23:58.364731] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:39.736 [2024-11-04 16:23:58.364805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:39.736 [2024-11-04 16:23:58.364848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:39.736 [2024-11-04 16:23:58.364859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:39.736 [2024-11-04 16:23:58.364871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:39.736 [2024-11-04 16:23:58.364883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:39.736 [2024-11-04 16:23:58.364894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:39.736 [2024-11-04 16:23:58.364905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:39.736 [2024-11-04 16:23:58.364916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:39.736 
[2024-11-04 16:23:58.364928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:39.736 [2024-11-04 16:23:58.364938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:39.736 [2024-11-04 16:23:58.364949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:39.736 [2024-11-04 16:23:58.364959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:39.736 [2024-11-04 16:23:58.364969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:39.736 [2024-11-04 16:23:58.364979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:39.736 [2024-11-04 16:23:58.364990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:39.736 [2024-11-04 16:23:58.365000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:39.736 [2024-11-04 16:23:58.365010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:39.736 [2024-11-04 16:23:58.365020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:39.736 [2024-11-04 16:23:58.365033] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:39.736 [2024-11-04 16:23:58.365042] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f7e20e0b-2d25-4b61-a652-08f699c4c408 00:29:39.736 [2024-11-04 16:23:58.365053] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:39.736 [2024-11-04 16:23:58.365063] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:39.736 [2024-11-04 16:23:58.365073] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:39.736 [2024-11-04 16:23:58.365083] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:39.736 [2024-11-04 16:23:58.365093] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:39.736 [2024-11-04 16:23:58.365103] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:39.736 [2024-11-04 16:23:58.365113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:39.736 [2024-11-04 16:23:58.365122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:39.736 [2024-11-04 16:23:58.365131] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:39.736 [2024-11-04 16:23:58.365142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.736 [2024-11-04 16:23:58.365158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:39.736 [2024-11-04 16:23:58.365171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.412 ms 00:29:39.736 [2024-11-04 16:23:58.365183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.736 [2024-11-04 16:23:58.383762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.736 [2024-11-04 16:23:58.383913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:39.736 [2024-11-04 16:23:58.384014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.574 ms 00:29:39.736 [2024-11-04 16:23:58.384051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
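Note: the surrounding trace is the counterpart of the earlier dirty restart. At upgrade_shutdown.sh@118-119 the traps are cleared and `cleanup` runs, and this time ftl/common.sh stops the target through `killprocess` (plain kill plus wait, autotest_common.sh@971/@976) rather than kill -9, so pid 81528 gets to execute the full "FTL shutdown" sequence being logged here: persist L2P and metadata, set the clean state, and dump band validity and statistics before exiting. A compressed sketch of that teardown path as it can be read from the trace; function boundaries and variable names are approximations.

```bash
# Teardown, roughly in the order the xtrace shows it
# (upgrade_shutdown.sh@11-15/@118-119, ftl/common.sh@130-145/@188-205).
trap - SIGINT SIGTERM EXIT

rm -f "$testdir/file" "$testdir/file.md5"        # scratch data + reference checksums

if [[ -n $spdk_tgt_pid ]]; then
	killprocess "$spdk_tgt_pid"              # SIGTERM + wait -> graceful FTL shutdown
	unset spdk_tgt_pid
fi
rm -f "$testdir/config/tgt.json"                 # target config used for the dirty restart
rm -f "$testdir/config/ini.json"                 # initiator config used by tcp_dd

remove_shm                                       # "Remove shared memory files" below
```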
00:29:39.736 [2024-11-04 16:23:58.384590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.736 [2024-11-04 16:23:58.384633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:39.736 [2024-11-04 16:23:58.384718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.496 ms 00:29:39.736 [2024-11-04 16:23:58.384765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.736 [2024-11-04 16:23:58.445490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:39.736 [2024-11-04 16:23:58.445613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:39.736 [2024-11-04 16:23:58.445705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:39.736 [2024-11-04 16:23:58.445741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.736 [2024-11-04 16:23:58.445815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:39.736 [2024-11-04 16:23:58.445848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:39.736 [2024-11-04 16:23:58.445878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:39.736 [2024-11-04 16:23:58.445907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.736 [2024-11-04 16:23:58.446002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:39.736 [2024-11-04 16:23:58.446122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:39.736 [2024-11-04 16:23:58.446179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:39.736 [2024-11-04 16:23:58.446209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.736 [2024-11-04 16:23:58.446249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:39.736 [2024-11-04 16:23:58.446287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:39.736 [2024-11-04 16:23:58.446317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:39.736 [2024-11-04 16:23:58.446347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.996 [2024-11-04 16:23:58.562593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:39.996 [2024-11-04 16:23:58.562816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:39.996 [2024-11-04 16:23:58.562949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:39.996 [2024-11-04 16:23:58.562996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.996 [2024-11-04 16:23:58.656544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:39.996 [2024-11-04 16:23:58.656727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:39.996 [2024-11-04 16:23:58.656895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:39.996 [2024-11-04 16:23:58.656933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.996 [2024-11-04 16:23:58.657053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:39.996 [2024-11-04 16:23:58.657089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:39.996 [2024-11-04 16:23:58.657177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:39.996 [2024-11-04 16:23:58.657211] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.996 [2024-11-04 16:23:58.657289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:39.996 [2024-11-04 16:23:58.657323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:39.996 [2024-11-04 16:23:58.657360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:39.996 [2024-11-04 16:23:58.657443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.996 [2024-11-04 16:23:58.657587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:39.996 [2024-11-04 16:23:58.657641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:39.996 [2024-11-04 16:23:58.657698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:39.996 [2024-11-04 16:23:58.657728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.996 [2024-11-04 16:23:58.657810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:39.996 [2024-11-04 16:23:58.657847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:39.996 [2024-11-04 16:23:58.657878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:39.996 [2024-11-04 16:23:58.657912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.996 [2024-11-04 16:23:58.657958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:39.996 [2024-11-04 16:23:58.657970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:39.996 [2024-11-04 16:23:58.657981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:39.996 [2024-11-04 16:23:58.657991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.996 [2024-11-04 16:23:58.658032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:39.996 [2024-11-04 16:23:58.658045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:39.996 [2024-11-04 16:23:58.658059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:39.996 [2024-11-04 16:23:58.658069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.996 [2024-11-04 16:23:58.658183] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 380.616 ms, result 0 00:29:41.371 16:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:41.371 16:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:41.371 16:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:41.371 16:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:41.371 16:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:41.371 16:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:41.371 Remove shared memory files 00:29:41.371 16:23:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:41.371 16:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:41.371 16:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:41.371 16:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:41.371 16:23:59 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81318 00:29:41.371 16:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:41.371 16:23:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:41.371 ************************************ 00:29:41.371 END TEST ftl_upgrade_shutdown 00:29:41.371 ************************************ 00:29:41.371 00:29:41.371 real 1m25.963s 00:29:41.371 user 1m55.514s 00:29:41.371 sys 0m24.404s 00:29:41.371 16:23:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:41.371 16:23:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:41.371 16:23:59 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:29:41.371 16:23:59 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:29:41.371 Process with pid 73988 is not found 00:29:41.371 16:23:59 ftl -- ftl/ftl.sh@14 -- # killprocess 73988 00:29:41.371 16:23:59 ftl -- common/autotest_common.sh@952 -- # '[' -z 73988 ']' 00:29:41.371 16:23:59 ftl -- common/autotest_common.sh@956 -- # kill -0 73988 00:29:41.371 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (73988) - No such process 00:29:41.371 16:23:59 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 73988 is not found' 00:29:41.371 16:23:59 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:29:41.371 16:23:59 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81776 00:29:41.371 16:23:59 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:41.371 16:23:59 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81776 00:29:41.371 16:23:59 ftl -- common/autotest_common.sh@833 -- # '[' -z 81776 ']' 00:29:41.371 16:23:59 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.371 16:23:59 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:41.371 16:23:59 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.371 16:23:59 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:41.371 16:23:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:41.371 [2024-11-04 16:24:00.069293] Starting SPDK v25.01-pre git sha1 61de1ff17 / DPDK 24.03.0 initialization... 
00:29:41.371 [2024-11-04 16:24:00.069431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81776 ] 00:29:41.630 [2024-11-04 16:24:00.254678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.888 [2024-11-04 16:24:00.364538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.824 16:24:01 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:42.824 16:24:01 ftl -- common/autotest_common.sh@866 -- # return 0 00:29:42.824 16:24:01 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:42.824 nvme0n1 00:29:42.824 16:24:01 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:29:42.824 16:24:01 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:42.824 16:24:01 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:43.082 16:24:01 ftl -- ftl/common.sh@28 -- # stores=d6cd4f8c-9cdf-4ef2-9b7d-70a65afc9ccd 00:29:43.082 16:24:01 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:29:43.082 16:24:01 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d6cd4f8c-9cdf-4ef2-9b7d-70a65afc9ccd 00:29:43.340 16:24:01 ftl -- ftl/ftl.sh@23 -- # killprocess 81776 00:29:43.340 16:24:01 ftl -- common/autotest_common.sh@952 -- # '[' -z 81776 ']' 00:29:43.340 16:24:01 ftl -- common/autotest_common.sh@956 -- # kill -0 81776 00:29:43.340 16:24:01 ftl -- common/autotest_common.sh@957 -- # uname 00:29:43.340 16:24:01 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:43.340 16:24:01 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81776 00:29:43.340 killing process with pid 81776 00:29:43.340 16:24:01 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:43.340 16:24:01 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:43.340 16:24:01 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81776' 00:29:43.340 16:24:01 ftl -- common/autotest_common.sh@971 -- # kill 81776 00:29:43.340 16:24:01 ftl -- common/autotest_common.sh@976 -- # wait 81776 00:29:45.871 16:24:04 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:45.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:45.871 Waiting for block devices as requested 00:29:46.129 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:46.129 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:46.129 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:29:46.388 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:29:51.694 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:29:51.694 Remove shared memory files 00:29:51.694 16:24:10 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:29:51.694 16:24:10 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:51.694 16:24:10 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:29:51.694 16:24:10 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:29:51.694 16:24:10 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:29:51.694 16:24:10 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:51.694 16:24:10 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:29:51.694 
************************************ 00:29:51.694 END TEST ftl 00:29:51.694 ************************************ 00:29:51.694 00:29:51.694 real 11m26.632s 00:29:51.694 user 13m47.288s 00:29:51.694 sys 1m32.484s 00:29:51.694 16:24:10 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:51.694 16:24:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:51.694 16:24:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:51.694 16:24:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:51.694 16:24:10 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:29:51.694 16:24:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:51.694 16:24:10 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:29:51.694 16:24:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:51.694 16:24:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:51.694 16:24:10 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:29:51.694 16:24:10 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:29:51.694 16:24:10 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:29:51.694 16:24:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:51.694 16:24:10 -- common/autotest_common.sh@10 -- # set +x 00:29:51.694 16:24:10 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:29:51.694 16:24:10 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:29:51.694 16:24:10 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:29:51.694 16:24:10 -- common/autotest_common.sh@10 -- # set +x 00:29:54.229 INFO: APP EXITING 00:29:54.229 INFO: killing all VMs 00:29:54.229 INFO: killing vhost app 00:29:54.229 INFO: EXIT DONE 00:29:54.488 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:54.747 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:55.005 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:55.005 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:29:55.005 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:29:55.574 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:55.833 Cleaning 00:29:55.833 Removing: /var/run/dpdk/spdk0/config 00:29:55.833 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:55.833 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:55.833 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:55.833 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:55.833 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:55.833 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:55.834 Removing: /var/run/dpdk/spdk0 00:29:55.834 Removing: /var/run/dpdk/spdk_pid57548 00:29:55.834 Removing: /var/run/dpdk/spdk_pid57783 00:29:55.834 Removing: /var/run/dpdk/spdk_pid58018 00:29:55.834 Removing: /var/run/dpdk/spdk_pid58128 00:29:55.834 Removing: /var/run/dpdk/spdk_pid58183 00:29:55.834 Removing: /var/run/dpdk/spdk_pid58312 00:29:55.834 Removing: /var/run/dpdk/spdk_pid58341 00:29:55.834 Removing: /var/run/dpdk/spdk_pid58551 00:29:55.834 Removing: /var/run/dpdk/spdk_pid58669 00:29:55.834 Removing: /var/run/dpdk/spdk_pid58782 00:29:55.834 Removing: /var/run/dpdk/spdk_pid58912 00:29:55.834 Removing: /var/run/dpdk/spdk_pid59026 00:29:55.834 Removing: /var/run/dpdk/spdk_pid59071 00:29:55.834 Removing: /var/run/dpdk/spdk_pid59104 00:29:56.093 Removing: /var/run/dpdk/spdk_pid59178 00:29:56.093 Removing: /var/run/dpdk/spdk_pid59295 00:29:56.093 Removing: /var/run/dpdk/spdk_pid59761 00:29:56.093 Removing: /var/run/dpdk/spdk_pid59836 
00:29:56.093 Removing: /var/run/dpdk/spdk_pid59918 00:29:56.093 Removing: /var/run/dpdk/spdk_pid59940 00:29:56.093 Removing: /var/run/dpdk/spdk_pid60099 00:29:56.093 Removing: /var/run/dpdk/spdk_pid60126 00:29:56.093 Removing: /var/run/dpdk/spdk_pid60288 00:29:56.093 Removing: /var/run/dpdk/spdk_pid60310 00:29:56.093 Removing: /var/run/dpdk/spdk_pid60385 00:29:56.093 Removing: /var/run/dpdk/spdk_pid60403 00:29:56.093 Removing: /var/run/dpdk/spdk_pid60467 00:29:56.093 Removing: /var/run/dpdk/spdk_pid60485 00:29:56.093 Removing: /var/run/dpdk/spdk_pid60686 00:29:56.093 Removing: /var/run/dpdk/spdk_pid60722 00:29:56.093 Removing: /var/run/dpdk/spdk_pid60811 00:29:56.093 Removing: /var/run/dpdk/spdk_pid61005 00:29:56.093 Removing: /var/run/dpdk/spdk_pid61106 00:29:56.093 Removing: /var/run/dpdk/spdk_pid61148 00:29:56.093 Removing: /var/run/dpdk/spdk_pid61602 00:29:56.093 Removing: /var/run/dpdk/spdk_pid61700 00:29:56.093 Removing: /var/run/dpdk/spdk_pid61826 00:29:56.093 Removing: /var/run/dpdk/spdk_pid61879 00:29:56.093 Removing: /var/run/dpdk/spdk_pid61905 00:29:56.093 Removing: /var/run/dpdk/spdk_pid61989 00:29:56.093 Removing: /var/run/dpdk/spdk_pid62637 00:29:56.093 Removing: /var/run/dpdk/spdk_pid62680 00:29:56.093 Removing: /var/run/dpdk/spdk_pid63178 00:29:56.093 Removing: /var/run/dpdk/spdk_pid63276 00:29:56.093 Removing: /var/run/dpdk/spdk_pid63396 00:29:56.093 Removing: /var/run/dpdk/spdk_pid63455 00:29:56.093 Removing: /var/run/dpdk/spdk_pid63480 00:29:56.093 Removing: /var/run/dpdk/spdk_pid63511 00:29:56.093 Removing: /var/run/dpdk/spdk_pid65408 00:29:56.093 Removing: /var/run/dpdk/spdk_pid65551 00:29:56.093 Removing: /var/run/dpdk/spdk_pid65560 00:29:56.093 Removing: /var/run/dpdk/spdk_pid65578 00:29:56.093 Removing: /var/run/dpdk/spdk_pid65618 00:29:56.093 Removing: /var/run/dpdk/spdk_pid65622 00:29:56.093 Removing: /var/run/dpdk/spdk_pid65634 00:29:56.093 Removing: /var/run/dpdk/spdk_pid65679 00:29:56.093 Removing: /var/run/dpdk/spdk_pid65683 00:29:56.093 Removing: /var/run/dpdk/spdk_pid65695 00:29:56.093 Removing: /var/run/dpdk/spdk_pid65741 00:29:56.093 Removing: /var/run/dpdk/spdk_pid65749 00:29:56.093 Removing: /var/run/dpdk/spdk_pid65762 00:29:56.093 Removing: /var/run/dpdk/spdk_pid67154 00:29:56.093 Removing: /var/run/dpdk/spdk_pid67266 00:29:56.093 Removing: /var/run/dpdk/spdk_pid68699 00:29:56.093 Removing: /var/run/dpdk/spdk_pid70075 00:29:56.093 Removing: /var/run/dpdk/spdk_pid70180 00:29:56.093 Removing: /var/run/dpdk/spdk_pid70294 00:29:56.093 Removing: /var/run/dpdk/spdk_pid70403 00:29:56.352 Removing: /var/run/dpdk/spdk_pid70529 00:29:56.352 Removing: /var/run/dpdk/spdk_pid70610 00:29:56.353 Removing: /var/run/dpdk/spdk_pid70764 00:29:56.353 Removing: /var/run/dpdk/spdk_pid71140 00:29:56.353 Removing: /var/run/dpdk/spdk_pid71182 00:29:56.353 Removing: /var/run/dpdk/spdk_pid71641 00:29:56.353 Removing: /var/run/dpdk/spdk_pid71825 00:29:56.353 Removing: /var/run/dpdk/spdk_pid71930 00:29:56.353 Removing: /var/run/dpdk/spdk_pid72041 00:29:56.353 Removing: /var/run/dpdk/spdk_pid72100 00:29:56.353 Removing: /var/run/dpdk/spdk_pid72126 00:29:56.353 Removing: /var/run/dpdk/spdk_pid72429 00:29:56.353 Removing: /var/run/dpdk/spdk_pid72495 00:29:56.353 Removing: /var/run/dpdk/spdk_pid72586 00:29:56.353 Removing: /var/run/dpdk/spdk_pid73027 00:29:56.353 Removing: /var/run/dpdk/spdk_pid73176 00:29:56.353 Removing: /var/run/dpdk/spdk_pid73988 00:29:56.353 Removing: /var/run/dpdk/spdk_pid74132 00:29:56.353 Removing: /var/run/dpdk/spdk_pid74357 00:29:56.353 Removing: 
/var/run/dpdk/spdk_pid74465 00:29:56.353 Removing: /var/run/dpdk/spdk_pid74773 00:29:56.353 Removing: /var/run/dpdk/spdk_pid75032 00:29:56.353 Removing: /var/run/dpdk/spdk_pid75384 00:29:56.353 Removing: /var/run/dpdk/spdk_pid75583 00:29:56.353 Removing: /var/run/dpdk/spdk_pid75729 00:29:56.353 Removing: /var/run/dpdk/spdk_pid75793 00:29:56.353 Removing: /var/run/dpdk/spdk_pid75942 00:29:56.353 Removing: /var/run/dpdk/spdk_pid75978 00:29:56.353 Removing: /var/run/dpdk/spdk_pid76036 00:29:56.353 Removing: /var/run/dpdk/spdk_pid76259 00:29:56.353 Removing: /var/run/dpdk/spdk_pid76499 00:29:56.353 Removing: /var/run/dpdk/spdk_pid76974 00:29:56.353 Removing: /var/run/dpdk/spdk_pid77453 00:29:56.353 Removing: /var/run/dpdk/spdk_pid77916 00:29:56.353 Removing: /var/run/dpdk/spdk_pid78438 00:29:56.353 Removing: /var/run/dpdk/spdk_pid78584 00:29:56.353 Removing: /var/run/dpdk/spdk_pid78673 00:29:56.353 Removing: /var/run/dpdk/spdk_pid79292 00:29:56.353 Removing: /var/run/dpdk/spdk_pid79361 00:29:56.353 Removing: /var/run/dpdk/spdk_pid79851 00:29:56.353 Removing: /var/run/dpdk/spdk_pid80237 00:29:56.353 Removing: /var/run/dpdk/spdk_pid80764 00:29:56.353 Removing: /var/run/dpdk/spdk_pid80887 00:29:56.353 Removing: /var/run/dpdk/spdk_pid80942 00:29:56.353 Removing: /var/run/dpdk/spdk_pid81000 00:29:56.353 Removing: /var/run/dpdk/spdk_pid81056 00:29:56.353 Removing: /var/run/dpdk/spdk_pid81121 00:29:56.353 Removing: /var/run/dpdk/spdk_pid81318 00:29:56.353 Removing: /var/run/dpdk/spdk_pid81398 00:29:56.353 Removing: /var/run/dpdk/spdk_pid81461 00:29:56.353 Removing: /var/run/dpdk/spdk_pid81528 00:29:56.353 Removing: /var/run/dpdk/spdk_pid81563 00:29:56.353 Removing: /var/run/dpdk/spdk_pid81625 00:29:56.353 Removing: /var/run/dpdk/spdk_pid81776 00:29:56.353 Clean 00:29:56.612 16:24:15 -- common/autotest_common.sh@1451 -- # return 0 00:29:56.612 16:24:15 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:29:56.612 16:24:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:56.612 16:24:15 -- common/autotest_common.sh@10 -- # set +x 00:29:56.612 16:24:15 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:29:56.612 16:24:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:56.612 16:24:15 -- common/autotest_common.sh@10 -- # set +x 00:29:56.612 16:24:15 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:56.612 16:24:15 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:56.612 16:24:15 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:56.612 16:24:15 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:29:56.612 16:24:15 -- spdk/autotest.sh@394 -- # hostname 00:29:56.612 16:24:15 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:56.871 geninfo: WARNING: invalid characters removed from testname! 
00:30:23.424 16:24:40 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:24.801 16:24:43 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:27.349 16:24:45 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:29.255 16:24:47 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:31.162 16:24:49 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:33.700 16:24:51 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:35.607 16:24:53 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:35.607 16:24:53 -- spdk/autorun.sh@1 -- $ timing_finish 00:30:35.607 16:24:53 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:30:35.607 16:24:53 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:35.607 16:24:53 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:30:35.607 16:24:53 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:35.607 + [[ -n 5243 ]] 00:30:35.607 + sudo kill 5243 00:30:35.616 [Pipeline] } 00:30:35.632 [Pipeline] // timeout 00:30:35.637 [Pipeline] } 00:30:35.651 [Pipeline] // stage 00:30:35.655 [Pipeline] } 00:30:35.669 [Pipeline] // catchError 00:30:35.678 [Pipeline] stage 00:30:35.680 [Pipeline] { (Stop VM) 00:30:35.690 [Pipeline] sh 00:30:36.005 + vagrant halt 00:30:38.541 ==> default: Halting domain... 
00:30:45.124 [Pipeline] sh 00:30:45.405 + vagrant destroy -f 00:30:47.941 ==> default: Removing domain... 00:30:48.520 [Pipeline] sh 00:30:48.859 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:30:48.868 [Pipeline] } 00:30:48.883 [Pipeline] // stage 00:30:48.887 [Pipeline] } 00:30:48.900 [Pipeline] // dir 00:30:48.905 [Pipeline] } 00:30:48.919 [Pipeline] // wrap 00:30:48.924 [Pipeline] } 00:30:48.935 [Pipeline] // catchError 00:30:48.943 [Pipeline] stage 00:30:48.946 [Pipeline] { (Epilogue) 00:30:48.958 [Pipeline] sh 00:30:49.240 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:54.531 [Pipeline] catchError 00:30:54.534 [Pipeline] { 00:30:54.547 [Pipeline] sh 00:30:54.832 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:54.832 Artifacts sizes are good 00:30:54.842 [Pipeline] } 00:30:54.856 [Pipeline] // catchError 00:30:54.867 [Pipeline] archiveArtifacts 00:30:54.875 Archiving artifacts 00:30:54.983 [Pipeline] cleanWs 00:30:54.994 [WS-CLEANUP] Deleting project workspace... 00:30:54.994 [WS-CLEANUP] Deferred wipeout is used... 00:30:55.000 [WS-CLEANUP] done 00:30:55.002 [Pipeline] } 00:30:55.015 [Pipeline] // stage 00:30:55.020 [Pipeline] } 00:30:55.032 [Pipeline] // node 00:30:55.036 [Pipeline] End of Pipeline 00:30:55.069 Finished: SUCCESS