00:00:00.000 Started by upstream project "autotest-per-patch" build number 132315
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "jbp-per-patch" build number 25755
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.064 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.064 The recommended git tool is: git
00:00:00.065 using credential 00000000-0000-0000-0000-000000000002
00:00:00.068 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.120 Fetching changes from the remote Git repository
00:00:00.124 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.150 Using shallow fetch with depth 1
00:00:00.150 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.150 > git --version # timeout=10
00:00:00.183 > git --version # 'git version 2.39.2'
00:00:00.184 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.209 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.209 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/87/24387/22 # timeout=5
00:00:04.440 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.473 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.487 Checking out Revision cfa3b8b20295fad8bbdf1ec61de6f7d828e66f18 (FETCH_HEAD)
00:00:04.487 > git config core.sparsecheckout # timeout=10
00:00:04.499 > git read-tree -mu HEAD # timeout=10
00:00:04.517 > git checkout -f cfa3b8b20295fad8bbdf1ec61de6f7d828e66f18 # timeout=5
00:00:04.536 Commit message: "jenkins/jjb-config: Add support for rebooting phy node into specific image"
00:00:04.536 > git rev-list --no-walk 6d4840695fb479ead742a39eb3a563a20cd15407 # timeout=10
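The fetch above pulls a single Gerrit patchset ref at depth 1 rather than cloning the whole pool repo. A minimal sketch of the same pattern outside Jenkins, using the repo URL and refspec from the log (the local directory name is hypothetical):

    # refs/changes/87/24387/22 = change 24387, patchset 22; the "87" bucket
    # is the last two digits of the change number.
    git init jbp-checkout && cd jbp-checkout
    git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --depth=1 origin refs/changes/87/24387/22
    git checkout -f FETCH_HEAD    # detached HEAD at the patchset commit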
00:00:04.651 [Pipeline] Start of Pipeline
00:00:04.662 [Pipeline] library
00:00:04.663 Loading library shm_lib@master
00:00:04.664 Library shm_lib@master is cached. Copying from home.
00:00:04.681 [Pipeline] node
00:00:04.689 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest
00:00:04.691 [Pipeline] {
00:00:04.700 [Pipeline] catchError
00:00:04.702 [Pipeline] {
00:00:04.716 [Pipeline] wrap
00:00:04.725 [Pipeline] {
00:00:04.733 [Pipeline] stage
00:00:04.735 [Pipeline] { (Prologue)
00:00:04.749 [Pipeline] echo
00:00:04.750 Node: VM-host-SM9
00:00:04.754 [Pipeline] cleanWs
00:00:04.762 [WS-CLEANUP] Deleting project workspace...
00:00:04.762 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.767 [WS-CLEANUP] done
00:00:04.945 [Pipeline] setCustomBuildProperty
00:00:05.034 [Pipeline] httpRequest
00:00:05.402 [Pipeline] echo
00:00:05.405 Sorcerer 10.211.164.20 is alive
00:00:05.416 [Pipeline] retry
00:00:05.417 [Pipeline] {
00:00:05.429 [Pipeline] httpRequest
00:00:05.433 HttpMethod: GET
00:00:05.433 URL: http://10.211.164.20/packages/jbp_cfa3b8b20295fad8bbdf1ec61de6f7d828e66f18.tar.gz
00:00:05.434 Sending request to url: http://10.211.164.20/packages/jbp_cfa3b8b20295fad8bbdf1ec61de6f7d828e66f18.tar.gz
00:00:05.439 Response Code: HTTP/1.1 200 OK
00:00:05.439 Success: Status code 200 is in the accepted range: 200,404
00:00:05.440 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_cfa3b8b20295fad8bbdf1ec61de6f7d828e66f18.tar.gz
00:00:18.553 [Pipeline] }
00:00:18.570 [Pipeline] // retry
00:00:18.578 [Pipeline] sh
00:00:18.859 + tar --no-same-owner -xf jbp_cfa3b8b20295fad8bbdf1ec61de6f7d828e66f18.tar.gz
00:00:18.874 [Pipeline] httpRequest
00:00:19.536 [Pipeline] echo
00:00:19.537 Sorcerer 10.211.164.20 is alive
00:00:19.546 [Pipeline] retry
00:00:19.548 [Pipeline] {
00:00:19.563 [Pipeline] httpRequest
00:00:19.567 HttpMethod: GET
00:00:19.567 URL: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:00:19.568 Sending request to url: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:00:19.576 Response Code: HTTP/1.1 200 OK
00:00:19.576 Success: Status code 200 is in the accepted range: 200,404
00:00:19.577 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:02:28.358 [Pipeline] }
00:02:28.376 [Pipeline] // retry
00:02:28.384 [Pipeline] sh
00:02:28.663 + tar --no-same-owner -xf spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz
00:02:31.962 [Pipeline] sh
00:02:32.243 + git -C spdk log --oneline -n5
00:02:32.243 d47eb51c9 bdev: fix a race between reset start and complete
00:02:32.243 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:02:32.243 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort()
00:02:32.243 4bcab9fb9 correct kick for CQ full case
00:02:32.243 8531656d3 test/nvmf: Interrupt test for local pcie nvme device
00:02:32.263 [Pipeline] writeFile
00:02:32.280 [Pipeline] sh
00:02:32.565 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:32.578 [Pipeline] sh
00:02:32.983 + cat autorun-spdk.conf
00:02:32.983 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:32.983 SPDK_TEST_NVME=1
00:02:32.983 SPDK_TEST_FTL=1
00:02:32.983 SPDK_TEST_ISAL=1
00:02:32.983 SPDK_RUN_ASAN=1
00:02:32.983 SPDK_RUN_UBSAN=1
00:02:32.983 SPDK_TEST_XNVME=1
00:02:32.983 SPDK_TEST_NVME_FDP=1
00:02:32.983 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:32.991 RUN_NIGHTLY=0
00:02:32.994 [Pipeline] }
00:02:33.009 [Pipeline] // stage
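autorun-spdk.conf is a plain shell fragment: each SPDK_TEST_*/SPDK_RUN_* assignment is sourced by the test scripts, which branch on the values. A minimal sketch of that consumption pattern, assuming a hypothetical run_if helper (the real gating lives inside SPDK's autotest scripts):

    #!/usr/bin/env bash
    source ./autorun-spdk.conf
    run_if() {                       # hypothetical helper
        local flag=$1; shift
        # ${!flag} dereferences the variable whose name is stored in $flag
        [[ ${!flag:-0} -eq 1 ]] && "$@"
    }
    run_if SPDK_TEST_NVME     echo "would run NVMe tests"
    run_if SPDK_TEST_FTL      echo "would run FTL tests"
    run_if SPDK_TEST_NVME_FDP echo "would run FDP tests"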
00:02:33.024 [Pipeline] stage
00:02:33.027 [Pipeline] { (Run VM)
00:02:33.038 [Pipeline] sh
00:02:33.315 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:33.315 + echo 'Start stage prepare_nvme.sh'
00:02:33.315 Start stage prepare_nvme.sh
00:02:33.315 + [[ -n 5 ]]
00:02:33.315 + disk_prefix=ex5
00:02:33.315 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:02:33.315 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:02:33.315 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:02:33.315 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:33.315 ++ SPDK_TEST_NVME=1
00:02:33.315 ++ SPDK_TEST_FTL=1
00:02:33.315 ++ SPDK_TEST_ISAL=1
00:02:33.315 ++ SPDK_RUN_ASAN=1
00:02:33.315 ++ SPDK_RUN_UBSAN=1
00:02:33.315 ++ SPDK_TEST_XNVME=1
00:02:33.315 ++ SPDK_TEST_NVME_FDP=1
00:02:33.315 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:33.315 ++ RUN_NIGHTLY=0
00:02:33.315 + cd /var/jenkins/workspace/nvme-vg-autotest
00:02:33.315 + nvme_files=()
00:02:33.315 + declare -A nvme_files
00:02:33.315 + backend_dir=/var/lib/libvirt/images/backends
00:02:33.315 + nvme_files['nvme.img']=5G
00:02:33.315 + nvme_files['nvme-cmb.img']=5G
00:02:33.315 + nvme_files['nvme-multi0.img']=4G
00:02:33.315 + nvme_files['nvme-multi1.img']=4G
00:02:33.315 + nvme_files['nvme-multi2.img']=4G
00:02:33.315 + nvme_files['nvme-openstack.img']=8G
00:02:33.315 + nvme_files['nvme-zns.img']=5G
00:02:33.315 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:33.315 + (( SPDK_TEST_FTL == 1 ))
00:02:33.315 + nvme_files["nvme-ftl.img"]=6G
00:02:33.315 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:33.315 + nvme_files["nvme-fdp.img"]=1G
00:02:33.315 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:33.315 + for nvme in "${!nvme_files[@]}"
00:02:33.315 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:02:33.315 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:33.315 + for nvme in "${!nvme_files[@]}"
00:02:33.315 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G
00:02:33.315 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:02:33.315 + for nvme in "${!nvme_files[@]}"
00:02:33.315 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:02:33.315 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:33.315 + for nvme in "${!nvme_files[@]}"
00:02:33.315 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:02:33.573 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:33.573 + for nvme in "${!nvme_files[@]}"
00:02:33.573 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:02:33.573 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:33.573 + for nvme in "${!nvme_files[@]}"
00:02:33.573 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:02:33.573 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:33.573 + for nvme in "${!nvme_files[@]}"
00:02:33.573 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:02:33.573 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:33.573 + for nvme in "${!nvme_files[@]}"
00:02:33.573 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G
00:02:33.573 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:33.573 + for nvme in "${!nvme_files[@]}"
00:02:33.573 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:02:33.830 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:33.830 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:02:33.830 + echo 'End stage prepare_nvme.sh'
00:02:33.830 End stage prepare_nvme.sh
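The Formatting lines above show raw images preallocated with falloc. create_nvme_img.sh's internals are not shown in this log, but a rough standalone equivalent for one backing file would be the standard qemu-img invocation:

    # Sketch: raw, fallocate-preallocated backing image, matching
    # "fmt=raw size=5368709120 preallocation=falloc" above.
    sudo qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex5-nvme.img 5G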
00:02:33.840 [Pipeline] sh
00:02:34.115 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:34.116 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:02:34.116
00:02:34.116 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:02:34.116 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:02:34.116 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:02:34.116 HELP=0
00:02:34.116 DRY_RUN=0
00:02:34.116 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,
00:02:34.116 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:34.116 NVME_AUTO_CREATE=0
00:02:34.116 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,,
00:02:34.116 NVME_CMB=,,,,
00:02:34.116 NVME_PMR=,,,,
00:02:34.116 NVME_ZNS=,,,,
00:02:34.116 NVME_MS=true,,,,
00:02:34.116 NVME_FDP=,,,on,
00:02:34.116 SPDK_VAGRANT_DISTRO=fedora39
00:02:34.116 SPDK_VAGRANT_VMCPU=10
00:02:34.116 SPDK_VAGRANT_VMRAM=12288
00:02:34.116 SPDK_VAGRANT_PROVIDER=libvirt
00:02:34.116 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:34.116 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:34.116 SPDK_OPENSTACK_NETWORK=0
00:02:34.116 VAGRANT_PACKAGE_BOX=0
00:02:34.116 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:34.116 FORCE_DISTRO=true
00:02:34.116 VAGRANT_BOX_VERSION=
00:02:34.116 EXTRA_VAGRANTFILES=
00:02:34.116 NIC_MODEL=e1000
00:02:34.116
00:02:34.116 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:02:34.116 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:02:37.397 Bringing machine 'default' up with 'libvirt' provider...
00:02:38.334 ==> default: Creating image (snapshot of base box volume).
00:02:38.592 ==> default: Creating domain with the following settings...
00:02:38.592 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732004477_0d369493222bfdabe683
00:02:38.592 ==> default: -- Domain type: kvm
00:02:38.592 ==> default: -- Cpus: 10
00:02:38.592 ==> default: -- Feature: acpi
00:02:38.592 ==> default: -- Feature: apic
00:02:38.592 ==> default: -- Feature: pae
00:02:38.592 ==> default: -- Memory: 12288M
00:02:38.592 ==> default: -- Memory Backing: hugepages:
00:02:38.592 ==> default: -- Management MAC:
00:02:38.592 ==> default: -- Loader:
00:02:38.592 ==> default: -- Nvram:
00:02:38.592 ==> default: -- Base box: spdk/fedora39
00:02:38.592 ==> default: -- Storage pool: default
00:02:38.592 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732004477_0d369493222bfdabe683.img (20G)
00:02:38.592 ==> default: -- Volume Cache: default
00:02:38.592 ==> default: -- Kernel:
00:02:38.592 ==> default: -- Initrd:
00:02:38.592 ==> default: -- Graphics Type: vnc
00:02:38.592 ==> default: -- Graphics Port: -1
00:02:38.592 ==> default: -- Graphics IP: 127.0.0.1
00:02:38.592 ==> default: -- Graphics Password: Not defined
00:02:38.592 ==> default: -- Video Type: cirrus
00:02:38.592 ==> default: -- Video VRAM: 9216
00:02:38.592 ==> default: -- Sound Type:
00:02:38.592 ==> default: -- Keymap: en-us
00:02:38.592 ==> default: -- TPM Path:
00:02:38.592 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:38.592 ==> default: -- Command line args:
00:02:38.592 ==> default: -> value=-device,
00:02:38.592 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:38.592 ==> default: -> value=-drive,
00:02:38.592 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:38.592 ==> default: -> value=-device,
00:02:38.592 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:38.592 ==> default: -> value=-device,
00:02:38.592 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:38.592 ==> default: -> value=-drive,
00:02:38.592 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0,
00:02:38.592 ==> default: -> value=-device,
00:02:38.592 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:38.592 ==> default: -> value=-device,
00:02:38.592 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:38.592 ==> default: -> value=-drive,
00:02:38.592 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:38.592 ==> default: -> value=-device,
00:02:38.592 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:38.592 ==> default: -> value=-drive,
00:02:38.592 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:38.592 ==> default: -> value=-device,
00:02:38.592 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:38.592 ==> default: -> value=-drive,
00:02:38.592 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:38.592 ==> default: -> value=-device,
00:02:38.592 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:38.592 ==> default: -> value=-device,
00:02:38.592 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:38.592 ==> default: -> value=-device,
00:02:38.593 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:38.593 ==> default: -> value=-drive,
00:02:38.593 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:38.593 ==> default: -> value=-device,
00:02:38.593 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
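The -device/-drive pairs above define four controllers: nvme-0 with a metadata-capable namespace (ms=64) on the FTL image, nvme-1 with a plain namespace, nvme-2 with three namespaces, and nvme-3 attached to an FDP-enabled subsystem. A trimmed sketch of just the FDP controller, reusing the exact properties from the log (binary path and argument ordering simplified):

    qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096
    # fdp.runs / fdp.nrg / fdp.nruh size the reclaim units, reclaim groups
    # and reclaim unit handles of the subsystem's Flexible Data Placement.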
00:02:38.593 ==> default: Creating shared folders metadata...
00:02:38.593 ==> default: Starting domain.
00:02:39.970 ==> default: Waiting for domain to get an IP address...
00:02:58.090 ==> default: Waiting for SSH to become available...
00:02:59.465 ==> default: Configuring and enabling network interfaces...
00:03:03.647 default: SSH address: 192.168.121.15:22
00:03:03.647 default: SSH username: vagrant
00:03:03.647 default: SSH auth method: private key
00:03:05.574 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:03:13.684 ==> default: Mounting SSHFS shared folder...
00:03:15.057 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:03:15.057 ==> default: Checking Mount..
00:03:16.431 ==> default: Folder Successfully Mounted!
00:03:16.431 ==> default: Running provisioner: file...
00:03:16.996 default: ~/.gitconfig => .gitconfig
00:03:17.562
00:03:17.562 SUCCESS!
00:03:17.562
00:03:17.562 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:03:17.562 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:03:17.562 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:03:17.562
00:03:17.570 [Pipeline] }
00:03:17.584 [Pipeline] // stage
00:03:17.592 [Pipeline] dir
00:03:17.593 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:03:17.594 [Pipeline] {
00:03:17.607 [Pipeline] catchError
00:03:17.608 [Pipeline] {
00:03:17.621 [Pipeline] sh
00:03:17.898 + vagrant ssh-config --host vagrant
00:03:17.898 + sed -ne /^Host/,$p
00:03:17.898 + tee ssh_conf
00:03:22.081 Host vagrant
00:03:22.081 HostName 192.168.121.15
00:03:22.081 User vagrant
00:03:22.081 Port 22
00:03:22.081 UserKnownHostsFile /dev/null
00:03:22.081 StrictHostKeyChecking no
00:03:22.081 PasswordAuthentication no
00:03:22.081 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:03:22.081 IdentitiesOnly yes
00:03:22.081 LogLevel FATAL
00:03:22.081 ForwardAgent yes
00:03:22.081 ForwardX11 yes
00:03:22.081
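Because vagrant ssh-config emits a standard OpenSSH config block, every later step can drive the box with stock ssh/scp via -F, exactly as the pipeline does below:

    vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' > ssh_conf
    ssh -F ssh_conf vagrant 'uname -a'
    scp -F ssh_conf some_local_file vagrant:./   # some_local_file is hypothetical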
00:03:22.092 [Pipeline] withEnv
00:03:22.097 [Pipeline] {
00:03:22.115 [Pipeline] sh
00:03:22.395 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:03:22.395 source /etc/os-release
00:03:22.395 [[ -e /image.version ]] && img=$(< /image.version)
00:03:22.395 # Minimal, systemd-like check.
00:03:22.395 if [[ -e /.dockerenv ]]; then
00:03:22.395 # Clear garbage from the node's name:
00:03:22.395 # agt-er_autotest_547-896 -> autotest_547-896
00:03:22.395 # $HOSTNAME is the actual container id
00:03:22.395 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:03:22.395 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:03:22.395 # We can assume this is a mount from a host where container is running,
00:03:22.395 # so fetch its hostname to easily identify the target swarm worker.
00:03:22.395 container="$(< /etc/hostname) ($agent)"
00:03:22.395 else
00:03:22.395 # Fallback
00:03:22.395 container=$agent
00:03:22.395 fi
00:03:22.395 fi
00:03:22.395 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:03:22.395
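The ${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} expansion in that script strips the shortest prefix ending in an underscore, which is what turns agt-er_autotest_547-896 into autotest_547-896. A two-line illustration:

    name='agt-er_autotest_547-896'
    echo "${name#*_}"    # prints: autotest_547-896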
00:03:22.405 [Pipeline] }
00:03:22.421 [Pipeline] // withEnv
00:03:22.429 [Pipeline] setCustomBuildProperty
00:03:22.443 [Pipeline] stage
00:03:22.445 [Pipeline] { (Tests)
00:03:22.461 [Pipeline] sh
00:03:22.739 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:03:22.752 [Pipeline] sh
00:03:23.030 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:03:23.043 [Pipeline] timeout
00:03:23.044 Timeout set to expire in 50 min
00:03:23.045 [Pipeline] {
00:03:23.056 [Pipeline] sh
00:03:23.333 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:03:23.899 HEAD is now at d47eb51c9 bdev: fix a race between reset start and complete
00:03:23.908 [Pipeline] sh
00:03:24.180 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:03:24.453 [Pipeline] sh
00:03:24.784 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:25.058 [Pipeline] sh
00:03:25.337 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:03:25.596 ++ readlink -f spdk_repo
00:03:25.596 + DIR_ROOT=/home/vagrant/spdk_repo
00:03:25.596 + [[ -n /home/vagrant/spdk_repo ]]
00:03:25.596 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:25.596 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:25.596 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:25.596 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:03:25.596 + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:25.596 + [[ nvme-vg-autotest == pkgdep-* ]]
00:03:25.596 + cd /home/vagrant/spdk_repo
00:03:25.596 + source /etc/os-release
00:03:25.596 ++ NAME='Fedora Linux'
00:03:25.596 ++ VERSION='39 (Cloud Edition)'
00:03:25.596 ++ ID=fedora
00:03:25.596 ++ VERSION_ID=39
00:03:25.596 ++ VERSION_CODENAME=
00:03:25.596 ++ PLATFORM_ID=platform:f39
00:03:25.596 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:25.596 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:25.596 ++ LOGO=fedora-logo-icon
00:03:25.596 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:25.596 ++ HOME_URL=https://fedoraproject.org/
00:03:25.596 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:25.596 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:25.596 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:25.596 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:25.596 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:25.596 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:25.596 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:25.596 ++ SUPPORT_END=2024-11-12
00:03:25.596 ++ VARIANT='Cloud Edition'
00:03:25.596 ++ VARIANT_ID=cloud
00:03:25.596 + uname -a
00:03:25.596 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:25.596 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:25.854 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:26.112 Hugepages
00:03:26.112 node hugesize free / total
00:03:26.112 node0 1048576kB 0 / 0
00:03:26.112 node0 2048kB 0 / 0
00:03:26.112
00:03:26.112 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:26.112 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:03:26.112 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:03:26.112 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:03:26.371 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:03:26.371 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
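The device table printed by setup.sh status is essentially a walk of the PCI sysfs tree. A hedged sketch of the same BDF-to-driver mapping in plain shell (standard Linux sysfs paths; the output formatting differs from the real script):

    for dev in /sys/bus/pci/devices/*; do
        bdf=${dev##*/}
        drv=unbound
        [[ -e $dev/driver ]] && drv=$(basename "$(readlink -f "$dev/driver")")
        echo "$bdf $(cat "$dev/vendor") $(cat "$dev/device") $drv"
    done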
00:03:26.371 + rm -f /tmp/spdk-ld-path
00:03:26.371 + source autorun-spdk.conf
00:03:26.371 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:26.371 ++ SPDK_TEST_NVME=1
00:03:26.371 ++ SPDK_TEST_FTL=1
00:03:26.371 ++ SPDK_TEST_ISAL=1
00:03:26.371 ++ SPDK_RUN_ASAN=1
00:03:26.371 ++ SPDK_RUN_UBSAN=1
00:03:26.371 ++ SPDK_TEST_XNVME=1
00:03:26.371 ++ SPDK_TEST_NVME_FDP=1
00:03:26.371 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:26.371 ++ RUN_NIGHTLY=0
00:03:26.371 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:26.371 + [[ -n '' ]]
00:03:26.371 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:26.371 + for M in /var/spdk/build-*-manifest.txt
00:03:26.371 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:26.371 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:03:26.371 + for M in /var/spdk/build-*-manifest.txt
00:03:26.371 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:26.371 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:26.371 + for M in /var/spdk/build-*-manifest.txt
00:03:26.371 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:26.371 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:03:26.371 ++ uname
00:03:26.371 + [[ Linux == \L\i\n\u\x ]]
00:03:26.371 + sudo dmesg -T
00:03:26.371 + sudo dmesg --clear
00:03:26.371 + dmesg_pid=5289
00:03:26.371 + [[ Fedora Linux == FreeBSD ]]
00:03:26.371 + sudo dmesg -Tw
00:03:26.371 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:26.371 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:26.371 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:26.371 + [[ -x /usr/src/fio-static/fio ]]
00:03:26.372 + export FIO_BIN=/usr/src/fio-static/fio
00:03:26.372 + FIO_BIN=/usr/src/fio-static/fio
00:03:26.372 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:26.372 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:26.372 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:26.372 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:26.372 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:26.372 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:26.372 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:26.372 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:26.372 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:26.372 08:22:05 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:03:26.372 08:22:05 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:26.372 08:22:05 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:26.372 08:22:05 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:03:26.372 08:22:05 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:03:26.372 08:22:05 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:03:26.372 08:22:05 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:03:26.372 08:22:05 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:03:26.372 08:22:05 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:03:26.372 08:22:05 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:03:26.372 08:22:05 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:26.372 08:22:05 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:03:26.372 08:22:05 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:26.372 08:22:05 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:26.630 08:22:05 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:03:26.630 08:22:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:26.630 08:22:05 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:26.630 08:22:05 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:26.630 08:22:05 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:26.630 08:22:05 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:26.630 08:22:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:26.630 08:22:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:26.630 08:22:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:26.630 08:22:05 -- paths/export.sh@5 -- $ export PATH
00:03:26.630 08:22:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
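Each source of paths/export.sh prepends again, which is why /opt/go, /opt/golangci and /opt/protoc each appear three times in the PATH above. Harmless here, but a common dedup sketch if it ever mattered:

    # Keep the first occurrence of each PATH entry, preserving order.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
    export PATH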
00:03:26.630 08:22:05 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:03:26.630 08:22:05 -- common/autobuild_common.sh@486 -- $ date +%s
00:03:26.630 08:22:05 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732004525.XXXXXX
00:03:26.630 08:22:05 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732004525.Kex16p
00:03:26.630 08:22:05 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:03:26.630 08:22:05 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:03:26.630 08:22:05 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:03:26.630 08:22:05 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:03:26.630 08:22:05 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:03:26.630 08:22:05 -- common/autobuild_common.sh@502 -- $ get_config_params
00:03:26.630 08:22:05 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:26.630 08:22:05 -- common/autotest_common.sh@10 -- $ set +x
00:03:26.630 08:22:05 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:03:26.630 08:22:05 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:03:26.630 08:22:05 -- pm/common@17 -- $ local monitor
00:03:26.630 08:22:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:26.630 08:22:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:26.630 08:22:05 -- pm/common@25 -- $ sleep 1
00:03:26.630 08:22:05 -- pm/common@21 -- $ date +%s
00:03:26.630 08:22:05 -- pm/common@21 -- $ date +%s
00:03:26.630 08:22:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732004525
00:03:26.630 08:22:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732004525
00:03:26.630 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732004525_collect-cpu-load.pm.log
00:03:26.630 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732004525_collect-vmstat.pm.log
00:03:27.565 08:22:06 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:03:27.565 08:22:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:27.565 08:22:06 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:27.565 08:22:06 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:27.565 08:22:06 -- spdk/autobuild.sh@16 -- $ date -u
00:03:27.565 Tue Nov 19 08:22:06 AM UTC 2024
00:03:27.565 08:22:06 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:27.565 v25.01-pre-190-gd47eb51c9
00:03:27.565 08:22:06 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:03:27.565 08:22:06 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:03:27.565 08:22:06 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:27.565 08:22:06 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:27.565 08:22:06 -- common/autotest_common.sh@10 -- $ set +x
00:03:27.565 ************************************
00:03:27.565 START TEST asan
00:03:27.565 ************************************
00:03:27.565 using asan
00:03:27.565 08:22:06 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:03:27.565
00:03:27.565 real 0m0.000s
00:03:27.565 user 0m0.000s
00:03:27.565 sys 0m0.000s
00:03:27.565 08:22:06 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:27.565 ************************************
00:03:27.565 END TEST asan
00:03:27.565 08:22:06 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:27.565 ************************************
00:03:27.565 08:22:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:27.565 08:22:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:27.565 08:22:06 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:27.565 08:22:06 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:27.565 08:22:06 -- common/autotest_common.sh@10 -- $ set +x
00:03:27.565 ************************************
00:03:27.565 START TEST ubsan
00:03:27.565 ************************************
00:03:27.565 using ubsan
00:03:27.565 08:22:06 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:27.565
00:03:27.565 real 0m0.000s
00:03:27.565 user 0m0.000s
00:03:27.565 sys 0m0.000s
00:03:27.565 08:22:06 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:27.565 08:22:06 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:27.565 ************************************
00:03:27.565 END TEST ubsan
00:03:27.565 ************************************
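The banners and real/user/sys lines above come from SPDK's run_test wrapper, which times a command between START/END markers. A hedged, simplified reimplementation of the pattern (the real one in common/autotest_common.sh also manages xtrace and logging):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        local t0=$SECONDS rc=0
        "$@" || rc=$?
        echo '************************************'
        echo "END TEST $name ($((SECONDS - t0))s, rc=$rc)"
        echo '************************************'
        return $rc
    }
    run_test ubsan echo 'using ubsan'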
00:03:27.565 08:22:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:27.565 08:22:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:27.565 08:22:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:27.565 08:22:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:27.565 08:22:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:27.565 08:22:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:27.565 08:22:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:27.565 08:22:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:27.565 08:22:06 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:03:27.822 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:27.822 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:28.080 Using 'verbs' RDMA provider
00:03:41.214 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:53.472 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:53.731 Creating mk/config.mk...done.
00:03:53.731 Creating mk/cc.flags.mk...done.
00:03:53.731 Type 'make' to build.
00:03:53.731 08:22:32 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:53.731 08:22:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:53.731 08:22:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:53.731 08:22:32 -- common/autotest_common.sh@10 -- $ set +x
00:03:53.731 ************************************
00:03:53.731 START TEST make
00:03:53.731 ************************************
00:03:53.989 08:22:32 make -- common/autotest_common.sh@1129 -- $ make -j10
00:03:53.989 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:53.989 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:53.989 meson setup builddir \
00:03:53.989 -Dwith-libaio=enabled \
00:03:53.989 -Dwith-liburing=enabled \
00:03:53.989 -Dwith-libvfn=disabled \
00:03:53.989 -Dwith-spdk=disabled \
00:03:53.989 -Dexamples=false \
00:03:53.989 -Dtests=false \
00:03:53.989 -Dtools=false && \
00:03:53.989 meson compile -C builddir && \
00:03:53.989 cd -)
00:03:53.989 make[1]: Nothing to be done for 'all'.
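To confirm what the meson setup above actually enabled, the build directory can be queried after the fact; running meson configure with no assignments just lists the current option values (the grep pattern is only for illustration):

    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson configure builddir | grep -E 'with-(libaio|liburing|libvfn|spdk)|examples|tests|tools'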
00:03:58.172 The Meson build system
00:03:58.172 Version: 1.5.0
00:03:58.172 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:03:58.172 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:58.172 Build type: native build
00:03:58.172 Project name: xnvme
00:03:58.172 Project version: 0.7.5
00:03:58.172 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:58.172 C linker for the host machine: cc ld.bfd 2.40-14
00:03:58.172 Host machine cpu family: x86_64
00:03:58.172 Host machine cpu: x86_64
00:03:58.172 Message: host_machine.system: linux
00:03:58.172 Compiler for C supports arguments -Wno-missing-braces: YES
00:03:58.172 Compiler for C supports arguments -Wno-cast-function-type: YES
00:03:58.172 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:58.172 Run-time dependency threads found: YES
00:03:58.172 Has header "setupapi.h" : NO
00:03:58.172 Has header "linux/blkzoned.h" : YES
00:03:58.172 Has header "linux/blkzoned.h" : YES (cached)
00:03:58.172 Has header "libaio.h" : YES
00:03:58.172 Library aio found: YES
00:03:58.172 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:58.172 Run-time dependency liburing found: YES 2.2
00:03:58.172 Dependency libvfn skipped: feature with-libvfn disabled
00:03:58.172 Found CMake: /usr/bin/cmake (3.27.7)
00:03:58.172 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:03:58.172 Subproject spdk : skipped: feature with-spdk disabled
00:03:58.172 Run-time dependency appleframeworks found: NO (tried framework)
00:03:58.172 Run-time dependency appleframeworks found: NO (tried framework)
00:03:58.172 Library rt found: YES
00:03:58.172 Checking for function "clock_gettime" with dependency -lrt: YES
00:03:58.172 Configuring xnvme_config.h using configuration
00:03:58.172 Configuring xnvme.spec using configuration
00:03:58.172 Run-time dependency bash-completion found: YES 2.11
00:03:58.172 Message: Bash-completions: /usr/share/bash-completion/completions
00:03:58.172 Program cp found: YES (/usr/bin/cp)
00:03:58.172 Build targets in project: 3
00:03:58.172
00:03:58.172 xnvme 0.7.5
00:03:58.172
00:03:58.172 Subprojects
00:03:58.172 spdk : NO Feature 'with-spdk' disabled
00:03:58.172
00:03:58.172 User defined options
00:03:58.172 examples : false
00:03:58.172 tests : false
00:03:58.172 tools : false
00:03:58.172 with-libaio : enabled
00:03:58.172 with-liburing: enabled
00:03:58.172 with-libvfn : disabled
00:03:58.172 with-spdk : disabled
00:03:58.172
00:03:58.172 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:58.430 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:58.430 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:03:58.689 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:03:58.689 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:03:58.689 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:03:58.689 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:03:58.689 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:03:58.689 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:03:58.689 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:03:58.689 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:03:58.689 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:03:58.689 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:03:58.689 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:03:58.689 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:03:58.947 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:03:58.947 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:03:58.947 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:03:58.947 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:03:58.947 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:03:58.947 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:03:58.947 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:03:58.947 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:03:59.206 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:03:59.206 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:03:59.206 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:03:59.206 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:03:59.206 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:03:59.206 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:03:59.206 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:03:59.206 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:03:59.206 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:03:59.206 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:03:59.206 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:03:59.206 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:03:59.206 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:03:59.206 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:03:59.206 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:03:59.206 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:03:59.206 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:03:59.206 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:03:59.206 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:03:59.206 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:03:59.206 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:03:59.206 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:03:59.465 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:03:59.465 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:03:59.465 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:03:59.465 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:03:59.465 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:03:59.465 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:03:59.465 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:03:59.465 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:03:59.465 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:03:59.465 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:03:59.465 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:03:59.465 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:03:59.465 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:03:59.465 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:03:59.724 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:03:59.724 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:03:59.724 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:03:59.724 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:03:59.724 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:03:59.724 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:03:59.724 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:03:59.724 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:03:59.982 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:03:59.982 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:03:59.982 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:03:59.982 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:03:59.982 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:03:59.982 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:03:59.982 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:03:59.982 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:04:00.549 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:04:00.549 [75/76] Linking static target lib/libxnvme.a
00:04:00.549 [76/76] Linking target lib/libxnvme.so.0.7.5
00:04:00.549 INFO: autodetecting backend as ninja
00:04:00.549 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:04:12.748 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:04:12.748 The Meson build system
00:04:12.748 Version: 1.5.0
00:04:12.748 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:04:12.748 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:04:12.748 Build type: native build
00:04:12.748 Program cat found: YES (/usr/bin/cat)
00:04:12.748 Project name: DPDK
00:04:12.748 Project version: 24.03.0
00:04:12.748 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:12.748 C linker for the host machine: cc ld.bfd 2.40-14
00:04:12.748 Host machine cpu family: x86_64
00:04:12.748 Host machine cpu: x86_64
00:04:12.748 Message: ## Building in Developer Mode ##
00:04:12.748 Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:12.748 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:04:12.748 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:04:12.748 Program python3 found: YES (/usr/bin/python3)
00:04:12.748 Program cat found: YES (/usr/bin/cat)
00:04:12.748 Compiler for C supports arguments -march=native: YES
00:04:12.748 Checking for size of "void *" : 8
00:04:12.748 Checking for size of "void *" : 8 (cached)
00:04:12.748 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:04:12.748 Library m found: YES
00:04:12.748 Library numa found: YES
00:04:12.748 Has header "numaif.h" : YES
00:04:12.748 Library fdt found: NO
00:04:12.748 Library execinfo found: NO
00:04:12.748 Has header "execinfo.h" : YES
00:04:12.748 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:12.748 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:12.748 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:12.748 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:12.748 Run-time dependency openssl found: YES 3.1.1
00:04:12.748 Run-time dependency libpcap found: YES 1.10.4
00:04:12.748 Has header "pcap.h" with dependency libpcap: YES
00:04:12.748 Compiler for C supports arguments -Wcast-qual: YES
00:04:12.748 Compiler for C supports arguments -Wdeprecated: YES
00:04:12.748 Compiler for C supports arguments -Wformat: YES
00:04:12.748 Compiler for C supports arguments -Wformat-nonliteral: NO
00:04:12.748 Compiler for C supports arguments -Wformat-security: NO
00:04:12.748 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:12.748 Compiler for C supports arguments -Wmissing-prototypes: YES
00:04:12.748 Compiler for C supports arguments -Wnested-externs: YES
00:04:12.748 Compiler for C supports arguments -Wold-style-definition: YES
00:04:12.748 Compiler for C supports arguments -Wpointer-arith: YES
00:04:12.748 Compiler for C supports arguments -Wsign-compare: YES
00:04:12.748 Compiler for C supports arguments -Wstrict-prototypes: YES
00:04:12.748 Compiler for C supports arguments -Wundef: YES
00:04:12.748 Compiler for C supports arguments -Wwrite-strings: YES
00:04:12.748 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:04:12.748 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:04:12.748 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:12.748 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:04:12.748 Program objdump found: YES (/usr/bin/objdump)
00:04:12.748 Compiler for C supports arguments -mavx512f: YES
00:04:12.748 Checking if "AVX512 checking" compiles: YES
00:04:12.748 Fetching value of define "__SSE4_2__" : 1
00:04:12.748 Fetching value of define "__AES__" : 1
00:04:12.748 Fetching value of define "__AVX__" : 1
00:04:12.748 Fetching value of define "__AVX2__" : 1
00:04:12.748 Fetching value of define "__AVX512BW__" : (undefined)
00:04:12.748 Fetching value of define "__AVX512CD__" : (undefined)
00:04:12.748 Fetching value of define "__AVX512DQ__" : (undefined)
00:04:12.748 Fetching value of define "__AVX512F__" : (undefined)
00:04:12.748 Fetching value of define "__AVX512VL__" : (undefined)
00:04:12.748 Fetching value of define "__PCLMUL__" : 1
00:04:12.748 Fetching value of define "__RDRND__" : 1
00:04:12.748 Fetching value of define "__RDSEED__" : 1
00:04:12.748 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:04:12.748 Fetching value of define "__znver1__" : (undefined)
00:04:12.748 Fetching value of define "__znver2__" : (undefined)
00:04:12.748 Fetching value of define "__znver3__" : (undefined)
00:04:12.748 Fetching value of define "__znver4__" : (undefined)
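The "Fetching value of define" probes boil down to asking the compiler which macros -march=native predefines. The same check is easy to reproduce from a shell, which is why __AVX2__ reports 1 while the AVX-512 macros stay undefined on this VM:

    echo | cc -march=native -dM -E - | grep -E '__(AVX2|AVX512F|AES|PCLMUL|RDRND|RDSEED)__'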
00:04:12.748 Library asan found: YES
00:04:12.748 Compiler for C supports arguments -Wno-format-truncation: YES
00:04:12.748 Message: lib/log: Defining dependency "log"
00:04:12.748 Message: lib/kvargs: Defining dependency "kvargs"
00:04:12.748 Message: lib/telemetry: Defining dependency "telemetry"
00:04:12.748 Library rt found: YES
00:04:12.748 Checking for function "getentropy" : NO
00:04:12.748 Message: lib/eal: Defining dependency "eal"
00:04:12.748 Message: lib/ring: Defining dependency "ring"
00:04:12.748 Message: lib/rcu: Defining dependency "rcu"
00:04:12.748 Message: lib/mempool: Defining dependency "mempool"
00:04:12.748 Message: lib/mbuf: Defining dependency "mbuf"
00:04:12.748 Fetching value of define "__PCLMUL__" : 1 (cached)
00:04:12.748 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:04:12.748 Compiler for C supports arguments -mpclmul: YES
00:04:12.748 Compiler for C supports arguments -maes: YES
00:04:12.748 Compiler for C supports arguments -mavx512f: YES (cached)
00:04:12.748 Compiler for C supports arguments -mavx512bw: YES
00:04:12.748 Compiler for C supports arguments -mavx512dq: YES
00:04:12.748 Compiler for C supports arguments -mavx512vl: YES
00:04:12.748 Compiler for C supports arguments -mvpclmulqdq: YES
00:04:12.749 Compiler for C supports arguments -mavx2: YES
00:04:12.749 Compiler for C supports arguments -mavx: YES
00:04:12.749 Message: lib/net: Defining dependency "net"
00:04:12.749 Message: lib/meter: Defining dependency "meter"
00:04:12.749 Message: lib/ethdev: Defining dependency "ethdev"
00:04:12.749 Message: lib/pci: Defining dependency "pci"
00:04:12.749 Message: lib/cmdline: Defining dependency "cmdline"
00:04:12.749 Message: lib/hash: Defining dependency "hash"
00:04:12.749 Message: lib/timer: Defining dependency "timer"
00:04:12.749 Message: lib/compressdev: Defining dependency "compressdev"
00:04:12.749 Message: lib/cryptodev: Defining dependency "cryptodev"
00:04:12.749 Message: lib/dmadev: Defining dependency "dmadev"
00:04:12.749 Compiler for C supports arguments -Wno-cast-qual: YES
00:04:12.749 Message: lib/power: Defining dependency "power"
00:04:12.749 Message: lib/reorder: Defining dependency "reorder"
00:04:12.749 Message: lib/security: Defining dependency "security"
00:04:12.749 Has header "linux/userfaultfd.h" : YES
00:04:12.749 Has header "linux/vduse.h" : YES
00:04:12.749 Message: lib/vhost: Defining dependency "vhost"
00:04:12.749 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:04:12.749 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:04:12.749 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:04:12.749 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:04:12.749 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:04:12.749 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:04:12.749 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:04:12.749 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:04:12.749 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:04:12.749 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:04:12.749 Program doxygen found: YES (/usr/local/bin/doxygen)
00:04:12.749 Configuring doxy-api-html.conf using configuration
00:04:12.749 Configuring doxy-api-man.conf using configuration
00:04:12.749 Program mandb found: YES (/usr/bin/mandb)
00:04:12.749 Program sphinx-build found: NO
00:04:12.749 Configuring rte_build_config.h using configuration
00:04:12.749 Message:
00:04:12.749 =================
00:04:12.749 Applications Enabled
00:04:12.749 =================
00:04:12.749
00:04:12.749 apps:
00:04:12.749
00:04:12.749
00:04:12.749 Message:
00:04:12.749 =================
00:04:12.749 Libraries Enabled
00:04:12.749 =================
00:04:12.749
00:04:12.749 libs:
00:04:12.749 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:04:12.749 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:04:12.749 cryptodev, dmadev, power, reorder, security, vhost,
00:04:12.749
00:04:12.749 Message:
00:04:12.749 ===============
00:04:12.749 Drivers Enabled
00:04:12.749 ===============
00:04:12.749
00:04:12.749 common:
00:04:12.749
00:04:12.749 bus:
00:04:12.749 pci, vdev,
00:04:12.749 mempool:
00:04:12.749 ring,
00:04:12.749 dma:
00:04:12.749
00:04:12.749 net:
00:04:12.749
00:04:12.749 crypto:
00:04:12.749
00:04:12.749 compress:
00:04:12.749
00:04:12.749 vdpa:
00:04:12.749
00:04:12.749
00:04:12.749 Message:
00:04:12.749 =================
00:04:12.749 Content Skipped
00:04:12.749 =================
00:04:12.749
00:04:12.749 apps:
00:04:12.749 dumpcap: explicitly disabled via build config
00:04:12.749 graph: explicitly disabled via build config
00:04:12.749 pdump: explicitly disabled via build config
00:04:12.749 proc-info: explicitly disabled via build config
00:04:12.749 test-acl: explicitly disabled via build config
00:04:12.749 test-bbdev: explicitly disabled via build config
00:04:12.749 test-cmdline: explicitly disabled via build config
00:04:12.749 test-compress-perf: explicitly disabled via build config
00:04:12.749 test-crypto-perf: explicitly disabled via build config
00:04:12.749 test-dma-perf: explicitly disabled via build config
00:04:12.749 test-eventdev: explicitly disabled via build config
00:04:12.749 test-fib: explicitly disabled via build config
00:04:12.749 test-flow-perf: explicitly disabled via build config
00:04:12.749 test-gpudev: explicitly disabled via build config
00:04:12.749 test-mldev: explicitly disabled via build config
00:04:12.749 test-pipeline: explicitly disabled via build config
00:04:12.749 test-pmd: explicitly disabled via build config
00:04:12.749 test-regex: explicitly disabled via build config
00:04:12.749 test-sad: explicitly disabled via build config
00:04:12.749 test-security-perf: explicitly disabled via build config
00:04:12.749
00:04:12.749 libs:
00:04:12.749 argparse: explicitly disabled via build config
00:04:12.749 metrics: explicitly disabled via build config
00:04:12.749 acl: explicitly disabled via build config
00:04:12.749 bbdev: explicitly disabled via build config
00:04:12.749 bitratestats: explicitly disabled via build config
00:04:12.749 bpf: explicitly disabled via build config
00:04:12.749 cfgfile: explicitly disabled via build config
00:04:12.749 distributor: explicitly disabled via build config
00:04:12.749 efd: explicitly disabled via build config
00:04:12.749 eventdev: explicitly disabled via build config
00:04:12.749 dispatcher: explicitly disabled via build config
00:04:12.749 gpudev: explicitly disabled via build config
00:04:12.749 gro: explicitly disabled via build config
00:04:12.749 gso: explicitly disabled via build config
00:04:12.749 ip_frag: explicitly disabled via build config
00:04:12.749 jobstats: explicitly disabled via build config
00:04:12.749 latencystats: explicitly disabled via build config
00:04:12.749 lpm: explicitly disabled via build config
00:04:12.749 member: explicitly disabled via build config
00:04:12.749 pcapng: explicitly disabled via build config
00:04:12.749 rawdev: explicitly disabled via build config
00:04:12.749 regexdev: explicitly disabled via build config
00:04:12.749 mldev: explicitly disabled via build config
00:04:12.749 rib: explicitly disabled via build config
00:04:12.749 sched: explicitly disabled via build config
00:04:12.749 stack: explicitly disabled via build config
00:04:12.749 ipsec: explicitly disabled via build config
00:04:12.749 pdcp: explicitly disabled via build config
00:04:12.749 fib: explicitly disabled via build config
00:04:12.749 port: explicitly disabled via build config
00:04:12.749 pdump: explicitly disabled via build config
00:04:12.749 table: explicitly disabled via build config
00:04:12.749 pipeline: explicitly disabled via build config
00:04:12.749 graph: explicitly disabled via build config
00:04:12.749 node: explicitly disabled via build config
00:04:12.749
00:04:12.749 drivers:
00:04:12.749 common/cpt: not in enabled drivers build config
00:04:12.749 common/dpaax: not in enabled drivers build config
00:04:12.749 common/iavf: not in enabled drivers build config
00:04:12.749 common/idpf: not in enabled drivers build config
00:04:12.749 common/ionic: not in enabled drivers build config
00:04:12.749 common/mvep: not in enabled drivers build config
00:04:12.749 common/octeontx: not in enabled drivers build config
00:04:12.749 bus/auxiliary: not in enabled drivers build config
00:04:12.749 bus/cdx: not in enabled drivers build config
00:04:12.749 bus/dpaa: not in enabled drivers build config
00:04:12.749 bus/fslmc: not in enabled drivers build config
00:04:12.749 bus/ifpga: not in enabled drivers build config
00:04:12.749 bus/platform: not in enabled drivers build config
00:04:12.749 bus/uacce: not in enabled drivers build config
00:04:12.749 bus/vmbus: not in enabled drivers build config
00:04:12.749 common/cnxk: not in enabled drivers build config
00:04:12.749 common/mlx5: not in enabled drivers build config
00:04:12.749 common/nfp: not in enabled drivers build config
00:04:12.749 common/nitrox: not in enabled drivers build config
00:04:12.749 common/qat: not in enabled drivers build config
00:04:12.749 common/sfc_efx: not in enabled drivers build config
00:04:12.749 mempool/bucket: not in enabled drivers build config
00:04:12.749 mempool/cnxk: not in enabled drivers build config
00:04:12.749 mempool/dpaa: not in enabled drivers build config
00:04:12.749 mempool/dpaa2: not in enabled drivers build config
00:04:12.749 mempool/octeontx: not in enabled drivers build config
00:04:12.749 mempool/stack: not in enabled drivers build config
00:04:12.749 dma/cnxk: not in enabled drivers build config
00:04:12.749 dma/dpaa: not in enabled drivers build config
00:04:12.749 dma/dpaa2: not in enabled drivers build config
00:04:12.749 dma/hisilicon: not in enabled drivers build config
00:04:12.749 dma/idxd: not in enabled drivers build config
00:04:12.749 dma/ioat: not in enabled drivers build config
00:04:12.749 dma/skeleton: not in enabled drivers build config
00:04:12.749 net/af_packet: not in enabled drivers build config
00:04:12.749 net/af_xdp: not in enabled drivers build config
00:04:12.749 net/ark: not in enabled drivers build config
00:04:12.749 net/atlantic: not in enabled drivers build config
00:04:12.749 net/avp: not in enabled drivers build config
00:04:12.749 net/axgbe: not in enabled drivers build config
00:04:12.749 net/bnx2x: not in enabled drivers build config
00:04:12.749 net/bnxt: not in enabled drivers build config
00:04:12.749 net/bonding: not in enabled drivers build config
00:04:12.749 net/cnxk: not in enabled drivers build config
00:04:12.749 net/cpfl: not in enabled drivers build config
00:04:12.749 net/cxgbe: not in enabled drivers build config
00:04:12.749 net/dpaa: not in enabled drivers build config
00:04:12.749 net/dpaa2: not in enabled drivers build config
00:04:12.749 net/e1000: not in enabled drivers build config
00:04:12.749 net/ena: not in enabled drivers build config
00:04:12.749 net/enetc: not in enabled drivers build config
00:04:12.749 net/enetfec: not in enabled drivers build config
00:04:12.749 net/enic: not in enabled drivers build config
00:04:12.749 net/failsafe: not in enabled drivers build config
00:04:12.749 net/fm10k: not in enabled drivers build config
00:04:12.749 net/gve: not in enabled drivers build config
00:04:12.749 net/hinic: not in enabled drivers build config
00:04:12.749 net/hns3: not in enabled drivers build config
00:04:12.749 net/i40e: not in enabled drivers build config
00:04:12.749 net/iavf: not in enabled drivers build config
00:04:12.749 net/ice: not in enabled drivers build config
00:04:12.749 net/idpf: not in enabled drivers build config
00:04:12.749 net/igc: not in enabled drivers build config
00:04:12.749 net/ionic: not in enabled drivers build config
00:04:12.749 net/ipn3ke: not in enabled drivers build config
00:04:12.749 net/ixgbe: not in enabled drivers build config
00:04:12.749 net/mana: not in enabled drivers build config
00:04:12.749 net/memif: not in enabled drivers build config
00:04:12.749 net/mlx4: not in enabled drivers build config
00:04:12.749 net/mlx5: not in enabled drivers build config
00:04:12.749 net/mvneta: not in enabled drivers build config
00:04:12.749 net/mvpp2: not in enabled drivers build config
00:04:12.749 net/netvsc: not in enabled drivers build config
00:04:12.749 net/nfb: not in enabled drivers build config
00:04:12.749 net/nfp: not in enabled drivers build config
00:04:12.749 net/ngbe: not in enabled drivers build config
00:04:12.749 net/null: not in enabled drivers build config
00:04:12.749 net/octeontx: not in enabled drivers build config
00:04:12.749 net/octeon_ep: not in enabled drivers build config
00:04:12.749 net/pcap: not in enabled drivers build config
00:04:12.749 net/pfe: not in enabled drivers build config
00:04:12.749 net/qede: not in enabled drivers build config
00:04:12.749 net/ring: not in enabled drivers build config
00:04:12.749 net/sfc: not in enabled drivers build config
00:04:12.749 net/softnic: not in enabled drivers build config
00:04:12.749 net/tap: not in enabled drivers build config
00:04:12.749 net/thunderx: not in enabled drivers build config
00:04:12.749 net/txgbe: not in enabled drivers build config
00:04:12.749 net/vdev_netvsc: not in enabled drivers build config
00:04:12.749 net/vhost: not in enabled drivers build config
00:04:12.749 net/virtio: not in enabled drivers build config
00:04:12.749 net/vmxnet3: not in enabled drivers build config
00:04:12.749 raw/*: missing internal dependency, "rawdev"
00:04:12.749 crypto/armv8: not in enabled drivers build config
00:04:12.749 crypto/bcmfs: not in enabled drivers build config
00:04:12.749 crypto/caam_jr: not in enabled drivers build config
00:04:12.749 crypto/ccp: not in enabled drivers build config
00:04:12.749 crypto/cnxk: not in enabled drivers build config
00:04:12.749 crypto/dpaa_sec: not in enabled drivers build config
00:04:12.749 crypto/dpaa2_sec: not in enabled drivers build config
00:04:12.749 crypto/ipsec_mb: not in enabled drivers build config
00:04:12.749 crypto/mlx5: not in enabled drivers build config
00:04:12.749 crypto/mvsam: not in enabled drivers build config
00:04:12.749 crypto/nitrox: not in enabled drivers build config
00:04:12.749 crypto/null: not in enabled drivers build config
00:04:12.749 crypto/octeontx: not in enabled drivers build config
00:04:12.749
crypto/openssl: not in enabled drivers build config 00:04:12.749 crypto/scheduler: not in enabled drivers build config 00:04:12.749 crypto/uadk: not in enabled drivers build config 00:04:12.749 crypto/virtio: not in enabled drivers build config 00:04:12.749 compress/isal: not in enabled drivers build config 00:04:12.749 compress/mlx5: not in enabled drivers build config 00:04:12.749 compress/nitrox: not in enabled drivers build config 00:04:12.749 compress/octeontx: not in enabled drivers build config 00:04:12.749 compress/zlib: not in enabled drivers build config 00:04:12.749 regex/*: missing internal dependency, "regexdev" 00:04:12.749 ml/*: missing internal dependency, "mldev" 00:04:12.749 vdpa/ifc: not in enabled drivers build config 00:04:12.749 vdpa/mlx5: not in enabled drivers build config 00:04:12.749 vdpa/nfp: not in enabled drivers build config 00:04:12.749 vdpa/sfc: not in enabled drivers build config 00:04:12.749 event/*: missing internal dependency, "eventdev" 00:04:12.749 baseband/*: missing internal dependency, "bbdev" 00:04:12.749 gpu/*: missing internal dependency, "gpudev" 00:04:12.749 00:04:12.749 00:04:13.314 Build targets in project: 85 00:04:13.314 00:04:13.314 DPDK 24.03.0 00:04:13.314 00:04:13.314 User defined options 00:04:13.314 buildtype : debug 00:04:13.314 default_library : shared 00:04:13.314 libdir : lib 00:04:13.314 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:13.314 b_sanitize : address 00:04:13.314 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:13.314 c_link_args : 00:04:13.314 cpu_instruction_set: native 00:04:13.314 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:13.314 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:13.314 enable_docs : false 00:04:13.314 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:13.314 enable_kmods : false 00:04:13.314 max_lcores : 128 00:04:13.314 tests : false 00:04:13.314 00:04:13.314 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:13.880 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:14.138 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:14.138 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:14.138 [3/268] Linking static target lib/librte_kvargs.a 00:04:14.138 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:14.138 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:14.138 [6/268] Linking static target lib/librte_log.a 00:04:14.708 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.708 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:14.966 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:15.234 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:15.234 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:15.234 [12/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:15.234 [13/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.491 [14/268] Linking target lib/librte_log.so.24.1 00:04:15.491 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:15.491 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:15.748 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:15.748 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:15.748 [19/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:16.005 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:16.005 [21/268] Linking target lib/librte_kvargs.so.24.1 00:04:16.005 [22/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:16.005 [23/268] Linking static target lib/librte_telemetry.a 00:04:16.005 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:16.263 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:16.520 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:16.520 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:16.777 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:16.777 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:16.777 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:17.035 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:17.035 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.035 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:17.291 [34/268] Linking target lib/librte_telemetry.so.24.1 00:04:17.549 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:17.549 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:17.549 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:17.549 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:17.549 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:17.806 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:17.806 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:17.806 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:17.806 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:18.064 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:18.064 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:18.322 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:18.579 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:18.837 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:18.837 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:18.837 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:18.837 [51/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:19.403 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:19.403 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:19.661 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:19.661 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:19.661 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:19.920 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:19.920 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:20.496 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:20.496 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:20.496 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:20.496 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:20.496 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:20.496 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:20.768 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:21.026 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:21.026 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:21.594 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:21.594 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:21.594 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:21.594 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:22.160 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:22.160 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:22.160 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:22.160 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:22.160 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:22.160 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:22.160 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:22.726 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:22.726 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:22.726 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:22.985 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:22.985 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:22.985 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:23.243 [85/268] Linking static target lib/librte_eal.a 00:04:23.243 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:23.243 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:23.243 [88/268] Linking static target lib/librte_ring.a 00:04:23.502 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:23.502 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:23.760 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:23.760 [92/268] Linking static target lib/librte_rcu.a 00:04:23.760 
[93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:23.760 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:23.760 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.019 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:24.278 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:24.278 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:24.278 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.278 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:24.536 [101/268] Linking static target lib/librte_mempool.a 00:04:24.536 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:24.536 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:24.795 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:24.795 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:25.053 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:25.053 [107/268] Linking static target lib/librte_net.a 00:04:25.313 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:25.313 [109/268] Linking static target lib/librte_meter.a 00:04:25.571 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:25.571 [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:25.571 [112/268] Linking static target lib/librte_mbuf.a 00:04:25.571 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:25.830 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.830 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.830 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:26.089 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.089 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:26.658 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:27.225 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:27.225 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.792 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:27.792 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:27.792 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:28.050 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:28.050 [126/268] Linking static target lib/librte_pci.a 00:04:28.309 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:28.567 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:28.567 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:28.826 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:28.826 [131/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.826 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:29.084 [133/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:29.084 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:29.084 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:29.084 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:29.084 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:29.343 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:29.343 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:29.343 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:29.343 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:29.343 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:29.343 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:29.601 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:29.862 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:30.177 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:30.177 [147/268] Linking static target lib/librte_cmdline.a 00:04:30.445 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:30.704 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:30.704 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:30.963 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:30.963 [152/268] Linking static target lib/librte_timer.a 00:04:31.221 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:31.221 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:31.221 [155/268] Linking static target lib/librte_ethdev.a 00:04:31.479 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:31.738 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:31.995 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.253 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:32.253 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:32.253 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:32.253 [162/268] Linking static target lib/librte_compressdev.a 00:04:32.253 [163/268] Linking static target lib/librte_hash.a 00:04:32.511 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:32.770 [165/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:32.770 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:33.029 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:33.290 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:33.290 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:33.608 [170/268] Linking static target lib/librte_dmadev.a 00:04:33.608 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:33.866 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:34.125 [173/268] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:34.125 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.383 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:34.383 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.951 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.209 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:35.209 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:35.209 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:35.468 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:35.468 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:35.727 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:35.727 [184/268] Linking static target lib/librte_power.a 00:04:35.985 [185/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:35.985 [186/268] Linking static target lib/librte_cryptodev.a 00:04:36.552 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:36.552 [188/268] Linking static target lib/librte_reorder.a 00:04:36.811 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:36.811 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:37.070 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:37.070 [192/268] Linking static target lib/librte_security.a 00:04:37.329 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:37.587 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:37.848 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.107 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:38.368 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.368 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:38.937 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:39.196 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:39.454 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:39.713 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:39.713 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:39.713 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:39.972 [205/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.546 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:40.546 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:40.546 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:40.805 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:40.805 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:40.805 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:41.064 [212/268] Generating 
drivers/rte_bus_vdev.pmd.c with a custom command 00:04:41.322 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:41.322 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:41.322 [215/268] Linking static target drivers/librte_bus_vdev.a 00:04:41.322 [216/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.322 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:41.322 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:41.322 [219/268] Linking static target drivers/librte_bus_pci.a 00:04:41.322 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:41.581 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:41.581 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:41.581 [223/268] Linking target lib/librte_eal.so.24.1 00:04:41.841 [224/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:41.841 [225/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.841 [226/268] Linking target lib/librte_timer.so.24.1 00:04:41.841 [227/268] Linking target lib/librte_pci.so.24.1 00:04:41.841 [228/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:41.841 [229/268] Linking target lib/librte_ring.so.24.1 00:04:41.841 [230/268] Linking target lib/librte_meter.so.24.1 00:04:41.841 [231/268] Linking target lib/librte_dmadev.so.24.1 00:04:41.841 [232/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:42.099 [233/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:42.099 [234/268] Linking static target drivers/librte_mempool_ring.a 00:04:42.099 [235/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:42.099 [236/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:42.099 [237/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:42.358 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:42.358 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:42.358 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:42.358 [241/268] Linking target lib/librte_rcu.so.24.1 00:04:42.358 [242/268] Linking target lib/librte_mempool.so.24.1 00:04:42.616 [243/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:42.616 [244/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:42.616 [245/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:42.616 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:42.616 [247/268] Linking target lib/librte_mbuf.so.24.1 00:04:42.616 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:42.875 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:43.133 [250/268] Linking target lib/librte_net.so.24.1 00:04:43.133 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:04:43.133 [252/268] Linking target lib/librte_compressdev.so.24.1 
00:04:43.133 [253/268] Linking target lib/librte_reorder.so.24.1 00:04:43.392 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:43.392 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:43.392 [256/268] Linking target lib/librte_cmdline.so.24.1 00:04:43.392 [257/268] Linking target lib/librte_hash.so.24.1 00:04:43.392 [258/268] Linking target lib/librte_security.so.24.1 00:04:43.650 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:44.586 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:45.522 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:45.522 [262/268] Linking target lib/librte_ethdev.so.24.1 00:04:45.781 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:45.781 [264/268] Linking target lib/librte_power.so.24.1 00:04:53.938 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:53.938 [266/268] Linking static target lib/librte_vhost.a 00:04:57.253 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:57.253 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:57.253 INFO: autodetecting backend as ninja 00:04:57.253 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:29.326 CC lib/ut/ut.o 00:05:29.326 CC lib/log/log.o 00:05:29.326 CC lib/ut_mock/mock.o 00:05:29.326 CC lib/log/log_flags.o 00:05:29.326 CC lib/log/log_deprecated.o 00:05:29.326 LIB libspdk_log.a 00:05:29.326 LIB libspdk_ut_mock.a 00:05:29.326 LIB libspdk_ut.a 00:05:29.326 SO libspdk_ut_mock.so.6.0 00:05:29.326 SO libspdk_log.so.7.1 00:05:29.326 SO libspdk_ut.so.2.0 00:05:29.326 SYMLINK libspdk_ut.so 00:05:29.326 SYMLINK libspdk_log.so 00:05:29.326 SYMLINK libspdk_ut_mock.so 00:05:29.326 CXX lib/trace_parser/trace.o 00:05:29.326 CC lib/util/base64.o 00:05:29.326 CC lib/util/bit_array.o 00:05:29.326 CC lib/util/cpuset.o 00:05:29.326 CC lib/util/crc16.o 00:05:29.326 CC lib/dma/dma.o 00:05:29.326 CC lib/util/crc32.o 00:05:29.326 CC lib/util/crc32c.o 00:05:29.326 CC lib/ioat/ioat.o 00:05:29.326 CC lib/vfio_user/host/vfio_user_pci.o 00:05:29.326 CC lib/util/crc32_ieee.o 00:05:29.326 CC lib/util/crc64.o 00:05:29.326 CC lib/util/dif.o 00:05:29.326 CC lib/util/fd.o 00:05:29.326 CC lib/util/fd_group.o 00:05:29.326 CC lib/vfio_user/host/vfio_user.o 00:05:29.326 CC lib/util/file.o 00:05:29.326 LIB libspdk_dma.a 00:05:29.326 LIB libspdk_ioat.a 00:05:29.326 SO libspdk_ioat.so.7.0 00:05:29.326 SO libspdk_dma.so.5.0 00:05:29.326 CC lib/util/hexlify.o 00:05:29.326 SYMLINK libspdk_ioat.so 00:05:29.326 SYMLINK libspdk_dma.so 00:05:29.326 CC lib/util/iov.o 00:05:29.326 CC lib/util/math.o 00:05:29.326 CC lib/util/net.o 00:05:29.326 CC lib/util/pipe.o 00:05:29.326 CC lib/util/strerror_tls.o 00:05:29.326 CC lib/util/string.o 00:05:29.326 CC lib/util/uuid.o 00:05:29.326 CC lib/util/xor.o 00:05:29.326 CC lib/util/zipf.o 00:05:29.326 LIB libspdk_vfio_user.a 00:05:29.326 SO libspdk_vfio_user.so.5.0 00:05:29.326 CC lib/util/md5.o 00:05:29.326 SYMLINK libspdk_vfio_user.so 00:05:29.326 LIB libspdk_util.a 00:05:29.326 LIB libspdk_trace_parser.a 00:05:29.326 SO libspdk_util.so.10.1 00:05:29.326 SO libspdk_trace_parser.so.6.0 00:05:29.326 SYMLINK libspdk_trace_parser.so 00:05:29.326 SYMLINK libspdk_util.so 00:05:29.326 CC lib/json/json_parse.o 
00:05:29.326 CC lib/json/json_util.o 00:05:29.326 CC lib/json/json_write.o 00:05:29.326 CC lib/vmd/vmd.o 00:05:29.326 CC lib/idxd/idxd.o 00:05:29.326 CC lib/vmd/led.o 00:05:29.326 CC lib/rdma_utils/rdma_utils.o 00:05:29.326 CC lib/idxd/idxd_user.o 00:05:29.326 CC lib/conf/conf.o 00:05:29.326 CC lib/env_dpdk/env.o 00:05:29.326 CC lib/env_dpdk/memory.o 00:05:29.326 CC lib/idxd/idxd_kernel.o 00:05:29.326 LIB libspdk_conf.a 00:05:29.326 SO libspdk_conf.so.6.0 00:05:29.326 CC lib/env_dpdk/pci.o 00:05:29.326 CC lib/env_dpdk/init.o 00:05:29.326 SYMLINK libspdk_conf.so 00:05:29.326 CC lib/env_dpdk/threads.o 00:05:29.326 LIB libspdk_rdma_utils.a 00:05:29.326 CC lib/env_dpdk/pci_ioat.o 00:05:29.326 LIB libspdk_json.a 00:05:29.326 SO libspdk_rdma_utils.so.1.0 00:05:29.326 SO libspdk_json.so.6.0 00:05:29.326 CC lib/env_dpdk/pci_virtio.o 00:05:29.326 SYMLINK libspdk_rdma_utils.so 00:05:29.326 CC lib/env_dpdk/pci_vmd.o 00:05:29.326 CC lib/env_dpdk/pci_idxd.o 00:05:29.326 SYMLINK libspdk_json.so 00:05:29.326 CC lib/env_dpdk/pci_event.o 00:05:29.326 CC lib/env_dpdk/sigbus_handler.o 00:05:29.326 CC lib/env_dpdk/pci_dpdk.o 00:05:29.326 LIB libspdk_idxd.a 00:05:29.326 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:29.326 SO libspdk_idxd.so.12.1 00:05:29.326 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:29.326 SYMLINK libspdk_idxd.so 00:05:29.326 LIB libspdk_vmd.a 00:05:29.326 SO libspdk_vmd.so.6.0 00:05:29.326 SYMLINK libspdk_vmd.so 00:05:29.326 CC lib/jsonrpc/jsonrpc_server.o 00:05:29.326 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:29.326 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:29.326 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:29.326 CC lib/jsonrpc/jsonrpc_client.o 00:05:29.326 CC lib/rdma_provider/common.o 00:05:29.584 LIB libspdk_rdma_provider.a 00:05:29.584 SO libspdk_rdma_provider.so.7.0 00:05:29.584 LIB libspdk_jsonrpc.a 00:05:29.584 SYMLINK libspdk_rdma_provider.so 00:05:29.842 SO libspdk_jsonrpc.so.6.0 00:05:29.842 SYMLINK libspdk_jsonrpc.so 00:05:30.100 CC lib/rpc/rpc.o 00:05:30.358 LIB libspdk_env_dpdk.a 00:05:30.358 LIB libspdk_rpc.a 00:05:30.358 SO libspdk_rpc.so.6.0 00:05:30.358 SO libspdk_env_dpdk.so.15.1 00:05:30.358 SYMLINK libspdk_rpc.so 00:05:30.648 SYMLINK libspdk_env_dpdk.so 00:05:30.648 CC lib/keyring/keyring.o 00:05:30.648 CC lib/keyring/keyring_rpc.o 00:05:30.648 CC lib/notify/notify.o 00:05:30.648 CC lib/notify/notify_rpc.o 00:05:30.648 CC lib/trace/trace.o 00:05:30.648 CC lib/trace/trace_flags.o 00:05:30.648 CC lib/trace/trace_rpc.o 00:05:30.916 LIB libspdk_notify.a 00:05:30.916 SO libspdk_notify.so.6.0 00:05:30.916 SYMLINK libspdk_notify.so 00:05:30.916 LIB libspdk_keyring.a 00:05:30.916 LIB libspdk_trace.a 00:05:30.916 SO libspdk_keyring.so.2.0 00:05:30.916 SO libspdk_trace.so.11.0 00:05:31.175 SYMLINK libspdk_keyring.so 00:05:31.175 SYMLINK libspdk_trace.so 00:05:31.433 CC lib/sock/sock.o 00:05:31.433 CC lib/sock/sock_rpc.o 00:05:31.433 CC lib/thread/thread.o 00:05:31.433 CC lib/thread/iobuf.o 00:05:32.377 LIB libspdk_sock.a 00:05:32.377 SO libspdk_sock.so.10.0 00:05:32.377 SYMLINK libspdk_sock.so 00:05:32.635 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:32.636 CC lib/nvme/nvme_ctrlr.o 00:05:32.636 CC lib/nvme/nvme_fabric.o 00:05:32.636 CC lib/nvme/nvme_ns_cmd.o 00:05:32.636 CC lib/nvme/nvme_ns.o 00:05:32.636 CC lib/nvme/nvme_pcie_common.o 00:05:32.636 CC lib/nvme/nvme_pcie.o 00:05:32.636 CC lib/nvme/nvme_qpair.o 00:05:32.636 CC lib/nvme/nvme.o 00:05:34.011 CC lib/nvme/nvme_quirks.o 00:05:34.011 CC lib/nvme/nvme_transport.o 00:05:34.011 CC lib/nvme/nvme_discovery.o 00:05:34.269 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:34.269 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:34.269 CC lib/nvme/nvme_tcp.o 00:05:34.269 LIB libspdk_thread.a 00:05:34.527 SO libspdk_thread.so.11.0 00:05:34.527 CC lib/nvme/nvme_opal.o 00:05:34.527 SYMLINK libspdk_thread.so 00:05:34.527 CC lib/nvme/nvme_io_msg.o 00:05:34.785 CC lib/accel/accel.o 00:05:35.044 CC lib/nvme/nvme_poll_group.o 00:05:35.303 CC lib/blob/blobstore.o 00:05:35.562 CC lib/init/json_config.o 00:05:35.562 CC lib/blob/request.o 00:05:35.562 CC lib/virtio/virtio.o 00:05:35.820 CC lib/fsdev/fsdev.o 00:05:35.820 CC lib/virtio/virtio_vhost_user.o 00:05:35.820 CC lib/fsdev/fsdev_io.o 00:05:36.079 CC lib/init/subsystem.o 00:05:36.079 CC lib/fsdev/fsdev_rpc.o 00:05:36.337 CC lib/accel/accel_rpc.o 00:05:36.337 CC lib/virtio/virtio_vfio_user.o 00:05:36.337 CC lib/virtio/virtio_pci.o 00:05:36.337 CC lib/accel/accel_sw.o 00:05:36.595 CC lib/init/subsystem_rpc.o 00:05:36.595 CC lib/nvme/nvme_zns.o 00:05:36.853 CC lib/nvme/nvme_stubs.o 00:05:36.853 CC lib/init/rpc.o 00:05:36.853 CC lib/blob/zeroes.o 00:05:36.853 LIB libspdk_virtio.a 00:05:36.853 SO libspdk_virtio.so.7.0 00:05:37.111 CC lib/nvme/nvme_auth.o 00:05:37.111 SYMLINK libspdk_virtio.so 00:05:37.111 CC lib/nvme/nvme_cuse.o 00:05:37.111 LIB libspdk_fsdev.a 00:05:37.111 LIB libspdk_init.a 00:05:37.111 SO libspdk_fsdev.so.2.0 00:05:37.111 SO libspdk_init.so.6.0 00:05:37.111 CC lib/nvme/nvme_rdma.o 00:05:37.111 SYMLINK libspdk_fsdev.so 00:05:37.111 SYMLINK libspdk_init.so 00:05:37.111 CC lib/blob/blob_bs_dev.o 00:05:37.369 LIB libspdk_accel.a 00:05:37.369 SO libspdk_accel.so.16.0 00:05:37.369 SYMLINK libspdk_accel.so 00:05:37.369 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:37.369 CC lib/event/app.o 00:05:37.369 CC lib/event/reactor.o 00:05:37.369 CC lib/event/log_rpc.o 00:05:37.627 CC lib/event/app_rpc.o 00:05:37.627 CC lib/event/scheduler_static.o 00:05:37.627 CC lib/bdev/bdev.o 00:05:37.885 CC lib/bdev/bdev_rpc.o 00:05:38.202 CC lib/bdev/bdev_zone.o 00:05:38.202 CC lib/bdev/part.o 00:05:38.202 CC lib/bdev/scsi_nvme.o 00:05:38.202 LIB libspdk_fuse_dispatcher.a 00:05:38.460 SO libspdk_fuse_dispatcher.so.1.0 00:05:38.460 SYMLINK libspdk_fuse_dispatcher.so 00:05:38.460 LIB libspdk_event.a 00:05:38.460 SO libspdk_event.so.14.0 00:05:38.718 SYMLINK libspdk_event.so 00:05:39.285 LIB libspdk_nvme.a 00:05:39.285 SO libspdk_nvme.so.15.0 00:05:39.852 SYMLINK libspdk_nvme.so 00:05:40.786 LIB libspdk_blob.a 00:05:40.786 SO libspdk_blob.so.11.0 00:05:41.044 SYMLINK libspdk_blob.so 00:05:41.044 CC lib/blobfs/blobfs.o 00:05:41.044 CC lib/blobfs/tree.o 00:05:41.044 CC lib/lvol/lvol.o 00:05:41.611 LIB libspdk_bdev.a 00:05:41.611 SO libspdk_bdev.so.17.0 00:05:41.869 SYMLINK libspdk_bdev.so 00:05:42.128 CC lib/nbd/nbd.o 00:05:42.128 CC lib/nbd/nbd_rpc.o 00:05:42.128 CC lib/ftl/ftl_core.o 00:05:42.128 CC lib/ftl/ftl_init.o 00:05:42.128 CC lib/ublk/ublk.o 00:05:42.128 CC lib/ftl/ftl_layout.o 00:05:42.128 CC lib/nvmf/ctrlr.o 00:05:42.128 CC lib/scsi/dev.o 00:05:42.386 CC lib/ftl/ftl_debug.o 00:05:42.386 CC lib/scsi/lun.o 00:05:42.386 LIB libspdk_blobfs.a 00:05:42.644 SO libspdk_blobfs.so.10.0 00:05:42.644 LIB libspdk_lvol.a 00:05:42.644 SO libspdk_lvol.so.10.0 00:05:42.644 CC lib/ublk/ublk_rpc.o 00:05:42.644 SYMLINK libspdk_blobfs.so 00:05:42.644 CC lib/nvmf/ctrlr_discovery.o 00:05:42.644 CC lib/nvmf/ctrlr_bdev.o 00:05:42.644 SYMLINK libspdk_lvol.so 00:05:42.644 CC lib/scsi/port.o 00:05:42.903 CC lib/scsi/scsi.o 00:05:42.903 CC lib/ftl/ftl_io.o 00:05:42.903 CC lib/nvmf/subsystem.o 00:05:43.233 CC 
lib/nvmf/nvmf.o 00:05:43.233 LIB libspdk_nbd.a 00:05:43.233 CC lib/ftl/ftl_sb.o 00:05:43.233 SO libspdk_nbd.so.7.0 00:05:43.233 CC lib/scsi/scsi_bdev.o 00:05:43.233 SYMLINK libspdk_nbd.so 00:05:43.233 CC lib/ftl/ftl_l2p.o 00:05:43.491 LIB libspdk_ublk.a 00:05:43.491 CC lib/ftl/ftl_l2p_flat.o 00:05:43.491 CC lib/ftl/ftl_nv_cache.o 00:05:43.491 SO libspdk_ublk.so.3.0 00:05:43.749 SYMLINK libspdk_ublk.so 00:05:43.749 CC lib/nvmf/nvmf_rpc.o 00:05:43.749 CC lib/scsi/scsi_pr.o 00:05:43.749 CC lib/ftl/ftl_band.o 00:05:43.749 CC lib/ftl/ftl_band_ops.o 00:05:44.008 CC lib/scsi/scsi_rpc.o 00:05:44.266 CC lib/ftl/ftl_writer.o 00:05:44.266 CC lib/ftl/ftl_rq.o 00:05:44.266 CC lib/scsi/task.o 00:05:44.523 CC lib/ftl/ftl_reloc.o 00:05:44.523 CC lib/nvmf/transport.o 00:05:44.523 CC lib/ftl/ftl_l2p_cache.o 00:05:44.523 CC lib/nvmf/tcp.o 00:05:44.780 CC lib/ftl/ftl_p2l.o 00:05:44.780 LIB libspdk_scsi.a 00:05:44.780 CC lib/nvmf/stubs.o 00:05:44.780 CC lib/ftl/ftl_p2l_log.o 00:05:44.780 SO libspdk_scsi.so.9.0 00:05:45.037 SYMLINK libspdk_scsi.so 00:05:45.037 CC lib/ftl/mngt/ftl_mngt.o 00:05:45.295 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:45.553 CC lib/nvmf/mdns_server.o 00:05:45.553 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:45.553 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:45.811 CC lib/iscsi/conn.o 00:05:45.811 CC lib/iscsi/init_grp.o 00:05:45.811 CC lib/iscsi/iscsi.o 00:05:45.811 CC lib/iscsi/param.o 00:05:46.069 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:46.069 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:46.069 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:46.069 CC lib/iscsi/portal_grp.o 00:05:46.328 CC lib/vhost/vhost.o 00:05:46.328 CC lib/iscsi/tgt_node.o 00:05:46.586 CC lib/vhost/vhost_rpc.o 00:05:46.586 CC lib/vhost/vhost_scsi.o 00:05:46.586 CC lib/nvmf/rdma.o 00:05:46.586 CC lib/iscsi/iscsi_subsystem.o 00:05:46.586 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:46.844 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:46.844 CC lib/iscsi/iscsi_rpc.o 00:05:47.102 CC lib/nvmf/auth.o 00:05:47.102 CC lib/iscsi/task.o 00:05:47.359 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:47.359 CC lib/vhost/vhost_blk.o 00:05:47.359 CC lib/vhost/rte_vhost_user.o 00:05:47.359 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:47.617 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:47.617 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:47.617 CC lib/ftl/utils/ftl_conf.o 00:05:47.617 CC lib/ftl/utils/ftl_md.o 00:05:47.875 LIB libspdk_iscsi.a 00:05:47.875 CC lib/ftl/utils/ftl_mempool.o 00:05:47.875 SO libspdk_iscsi.so.8.0 00:05:47.875 CC lib/ftl/utils/ftl_bitmap.o 00:05:48.134 CC lib/ftl/utils/ftl_property.o 00:05:48.134 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:48.134 SYMLINK libspdk_iscsi.so 00:05:48.134 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:48.134 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:48.134 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:48.134 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:48.391 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:48.391 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:48.391 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:48.391 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:48.391 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:48.391 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:48.391 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:48.391 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:48.649 CC lib/ftl/base/ftl_base_dev.o 00:05:48.649 CC lib/ftl/base/ftl_base_bdev.o 00:05:48.649 CC lib/ftl/ftl_trace.o 00:05:48.908 LIB libspdk_vhost.a 00:05:48.908 LIB libspdk_ftl.a 00:05:48.908 SO libspdk_vhost.so.8.0 00:05:49.167 SYMLINK libspdk_vhost.so 00:05:49.167 SO libspdk_ftl.so.9.0 00:05:49.426 LIB libspdk_nvmf.a 
00:05:49.684 SYMLINK libspdk_ftl.so 00:05:49.684 SO libspdk_nvmf.so.20.0 00:05:49.943 SYMLINK libspdk_nvmf.so 00:05:50.510 CC module/env_dpdk/env_dpdk_rpc.o 00:05:50.510 CC module/sock/posix/posix.o 00:05:50.510 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:50.510 CC module/scheduler/gscheduler/gscheduler.o 00:05:50.510 CC module/keyring/file/keyring.o 00:05:50.510 CC module/accel/error/accel_error.o 00:05:50.510 CC module/fsdev/aio/fsdev_aio.o 00:05:50.510 CC module/blob/bdev/blob_bdev.o 00:05:50.510 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:50.510 CC module/accel/ioat/accel_ioat.o 00:05:50.510 LIB libspdk_env_dpdk_rpc.a 00:05:50.510 SO libspdk_env_dpdk_rpc.so.6.0 00:05:50.767 LIB libspdk_scheduler_dpdk_governor.a 00:05:50.767 SYMLINK libspdk_env_dpdk_rpc.so 00:05:50.768 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:50.768 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:50.768 LIB libspdk_scheduler_gscheduler.a 00:05:50.768 SO libspdk_scheduler_gscheduler.so.4.0 00:05:50.768 CC module/accel/ioat/accel_ioat_rpc.o 00:05:50.768 CC module/accel/error/accel_error_rpc.o 00:05:50.768 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:50.768 CC module/keyring/file/keyring_rpc.o 00:05:50.768 SYMLINK libspdk_scheduler_gscheduler.so 00:05:50.768 CC module/fsdev/aio/linux_aio_mgr.o 00:05:51.026 LIB libspdk_scheduler_dynamic.a 00:05:51.026 LIB libspdk_accel_ioat.a 00:05:51.026 LIB libspdk_accel_error.a 00:05:51.026 LIB libspdk_blob_bdev.a 00:05:51.026 SO libspdk_scheduler_dynamic.so.4.0 00:05:51.026 SO libspdk_accel_error.so.2.0 00:05:51.026 SO libspdk_accel_ioat.so.6.0 00:05:51.026 SO libspdk_blob_bdev.so.11.0 00:05:51.026 LIB libspdk_keyring_file.a 00:05:51.026 CC module/keyring/linux/keyring.o 00:05:51.026 SO libspdk_keyring_file.so.2.0 00:05:51.026 SYMLINK libspdk_scheduler_dynamic.so 00:05:51.026 SYMLINK libspdk_accel_error.so 00:05:51.026 CC module/keyring/linux/keyring_rpc.o 00:05:51.026 SYMLINK libspdk_blob_bdev.so 00:05:51.026 SYMLINK libspdk_accel_ioat.so 00:05:51.026 SYMLINK libspdk_keyring_file.so 00:05:51.284 CC module/accel/dsa/accel_dsa.o 00:05:51.284 CC module/accel/dsa/accel_dsa_rpc.o 00:05:51.284 LIB libspdk_keyring_linux.a 00:05:51.284 SO libspdk_keyring_linux.so.1.0 00:05:51.284 CC module/accel/iaa/accel_iaa.o 00:05:51.284 SYMLINK libspdk_keyring_linux.so 00:05:51.284 CC module/accel/iaa/accel_iaa_rpc.o 00:05:51.542 CC module/bdev/gpt/gpt.o 00:05:51.542 CC module/bdev/delay/vbdev_delay.o 00:05:51.542 CC module/bdev/error/vbdev_error.o 00:05:51.542 CC module/blobfs/bdev/blobfs_bdev.o 00:05:51.542 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:51.542 LIB libspdk_accel_iaa.a 00:05:51.801 LIB libspdk_fsdev_aio.a 00:05:51.801 CC module/bdev/lvol/vbdev_lvol.o 00:05:51.801 SO libspdk_accel_iaa.so.3.0 00:05:51.801 SO libspdk_fsdev_aio.so.1.0 00:05:51.801 LIB libspdk_sock_posix.a 00:05:51.801 LIB libspdk_accel_dsa.a 00:05:51.801 SYMLINK libspdk_accel_iaa.so 00:05:51.801 SO libspdk_sock_posix.so.6.0 00:05:51.801 SYMLINK libspdk_fsdev_aio.so 00:05:51.801 SO libspdk_accel_dsa.so.5.0 00:05:51.801 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:51.801 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:51.801 SYMLINK libspdk_accel_dsa.so 00:05:51.801 CC module/bdev/gpt/vbdev_gpt.o 00:05:51.801 CC module/bdev/error/vbdev_error_rpc.o 00:05:52.058 SYMLINK libspdk_sock_posix.so 00:05:52.058 LIB libspdk_blobfs_bdev.a 00:05:52.058 LIB libspdk_bdev_delay.a 00:05:52.058 SO libspdk_blobfs_bdev.so.6.0 00:05:52.058 CC module/bdev/malloc/bdev_malloc.o 00:05:52.058 SO libspdk_bdev_delay.so.6.0 
00:05:52.058 CC module/bdev/null/bdev_null.o 00:05:52.058 CC module/bdev/nvme/bdev_nvme.o 00:05:52.317 LIB libspdk_bdev_error.a 00:05:52.317 SYMLINK libspdk_blobfs_bdev.so 00:05:52.317 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:52.317 SO libspdk_bdev_error.so.6.0 00:05:52.317 CC module/bdev/passthru/vbdev_passthru.o 00:05:52.317 LIB libspdk_bdev_gpt.a 00:05:52.317 SYMLINK libspdk_bdev_delay.so 00:05:52.317 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:52.317 SO libspdk_bdev_gpt.so.6.0 00:05:52.317 SYMLINK libspdk_bdev_error.so 00:05:52.317 SYMLINK libspdk_bdev_gpt.so 00:05:52.575 CC module/bdev/null/bdev_null_rpc.o 00:05:52.575 LIB libspdk_bdev_lvol.a 00:05:52.575 CC module/bdev/raid/bdev_raid.o 00:05:52.575 CC module/bdev/split/vbdev_split.o 00:05:52.575 SO libspdk_bdev_lvol.so.6.0 00:05:52.575 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:52.575 LIB libspdk_bdev_passthru.a 00:05:52.834 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:52.834 CC module/bdev/xnvme/bdev_xnvme.o 00:05:52.834 SO libspdk_bdev_passthru.so.6.0 00:05:52.834 SYMLINK libspdk_bdev_lvol.so 00:05:52.834 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:52.834 LIB libspdk_bdev_null.a 00:05:52.834 SYMLINK libspdk_bdev_passthru.so 00:05:52.834 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:05:52.834 SO libspdk_bdev_null.so.6.0 00:05:52.834 LIB libspdk_bdev_malloc.a 00:05:52.834 SYMLINK libspdk_bdev_null.so 00:05:52.834 SO libspdk_bdev_malloc.so.6.0 00:05:52.834 CC module/bdev/split/vbdev_split_rpc.o 00:05:53.092 CC module/bdev/raid/bdev_raid_rpc.o 00:05:53.092 SYMLINK libspdk_bdev_malloc.so 00:05:53.092 CC module/bdev/raid/bdev_raid_sb.o 00:05:53.092 CC module/bdev/nvme/nvme_rpc.o 00:05:53.092 CC module/bdev/aio/bdev_aio.o 00:05:53.092 LIB libspdk_bdev_split.a 00:05:53.350 CC module/bdev/ftl/bdev_ftl.o 00:05:53.350 SO libspdk_bdev_split.so.6.0 00:05:53.350 CC module/bdev/raid/raid0.o 00:05:53.350 SYMLINK libspdk_bdev_split.so 00:05:53.350 LIB libspdk_bdev_xnvme.a 00:05:53.350 LIB libspdk_bdev_zone_block.a 00:05:53.350 SO libspdk_bdev_zone_block.so.6.0 00:05:53.350 SO libspdk_bdev_xnvme.so.3.0 00:05:53.608 SYMLINK libspdk_bdev_xnvme.so 00:05:53.608 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:53.608 SYMLINK libspdk_bdev_zone_block.so 00:05:53.608 CC module/bdev/nvme/bdev_mdns_client.o 00:05:53.608 CC module/bdev/aio/bdev_aio_rpc.o 00:05:53.608 CC module/bdev/iscsi/bdev_iscsi.o 00:05:53.608 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:53.608 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:53.608 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:53.608 CC module/bdev/raid/raid1.o 00:05:53.866 LIB libspdk_bdev_aio.a 00:05:53.866 SO libspdk_bdev_aio.so.6.0 00:05:53.866 CC module/bdev/nvme/vbdev_opal.o 00:05:53.866 LIB libspdk_bdev_ftl.a 00:05:53.866 SYMLINK libspdk_bdev_aio.so 00:05:53.866 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:53.866 SO libspdk_bdev_ftl.so.6.0 00:05:54.124 SYMLINK libspdk_bdev_ftl.so 00:05:54.124 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:54.124 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:54.124 CC module/bdev/raid/concat.o 00:05:54.124 LIB libspdk_bdev_iscsi.a 00:05:54.383 SO libspdk_bdev_iscsi.so.6.0 00:05:54.383 SYMLINK libspdk_bdev_iscsi.so 00:05:54.383 LIB libspdk_bdev_raid.a 00:05:54.383 SO libspdk_bdev_raid.so.6.0 00:05:54.641 SYMLINK libspdk_bdev_raid.so 00:05:54.641 LIB libspdk_bdev_virtio.a 00:05:54.641 SO libspdk_bdev_virtio.so.6.0 00:05:54.641 SYMLINK libspdk_bdev_virtio.so 00:05:56.540 LIB libspdk_bdev_nvme.a 00:05:56.540 SO libspdk_bdev_nvme.so.7.1 00:05:56.798 SYMLINK libspdk_bdev_nvme.so 
00:05:57.056 CC module/event/subsystems/fsdev/fsdev.o 00:05:57.056 CC module/event/subsystems/vmd/vmd.o 00:05:57.056 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:57.056 CC module/event/subsystems/keyring/keyring.o 00:05:57.056 CC module/event/subsystems/iobuf/iobuf.o 00:05:57.056 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:57.056 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:57.314 CC module/event/subsystems/sock/sock.o 00:05:57.314 CC module/event/subsystems/scheduler/scheduler.o 00:05:57.314 LIB libspdk_event_vhost_blk.a 00:05:57.314 LIB libspdk_event_keyring.a 00:05:57.314 SO libspdk_event_vhost_blk.so.3.0 00:05:57.314 SO libspdk_event_keyring.so.1.0 00:05:57.314 LIB libspdk_event_scheduler.a 00:05:57.315 SO libspdk_event_scheduler.so.4.0 00:05:57.315 LIB libspdk_event_sock.a 00:05:57.315 LIB libspdk_event_iobuf.a 00:05:57.573 SYMLINK libspdk_event_vhost_blk.so 00:05:57.573 SYMLINK libspdk_event_keyring.so 00:05:57.573 LIB libspdk_event_vmd.a 00:05:57.573 LIB libspdk_event_fsdev.a 00:05:57.573 SO libspdk_event_sock.so.5.0 00:05:57.573 SO libspdk_event_iobuf.so.3.0 00:05:57.573 SO libspdk_event_fsdev.so.1.0 00:05:57.573 SO libspdk_event_vmd.so.6.0 00:05:57.573 SYMLINK libspdk_event_scheduler.so 00:05:57.573 SYMLINK libspdk_event_iobuf.so 00:05:57.573 SYMLINK libspdk_event_sock.so 00:05:57.573 SYMLINK libspdk_event_fsdev.so 00:05:57.573 SYMLINK libspdk_event_vmd.so 00:05:57.831 CC module/event/subsystems/accel/accel.o 00:05:58.190 LIB libspdk_event_accel.a 00:05:58.190 SO libspdk_event_accel.so.6.0 00:05:58.190 SYMLINK libspdk_event_accel.so 00:05:58.460 CC module/event/subsystems/bdev/bdev.o 00:05:58.719 LIB libspdk_event_bdev.a 00:05:58.719 SO libspdk_event_bdev.so.6.0 00:05:58.719 SYMLINK libspdk_event_bdev.so 00:05:58.978 CC module/event/subsystems/scsi/scsi.o 00:05:58.978 CC module/event/subsystems/ublk/ublk.o 00:05:58.978 CC module/event/subsystems/nbd/nbd.o 00:05:58.978 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:58.978 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:59.237 LIB libspdk_event_nbd.a 00:05:59.237 LIB libspdk_event_ublk.a 00:05:59.237 SO libspdk_event_nbd.so.6.0 00:05:59.237 SO libspdk_event_ublk.so.3.0 00:05:59.237 LIB libspdk_event_scsi.a 00:05:59.237 SO libspdk_event_scsi.so.6.0 00:05:59.495 SYMLINK libspdk_event_nbd.so 00:05:59.495 SYMLINK libspdk_event_ublk.so 00:05:59.495 SYMLINK libspdk_event_scsi.so 00:05:59.495 LIB libspdk_event_nvmf.a 00:05:59.495 SO libspdk_event_nvmf.so.6.0 00:05:59.495 SYMLINK libspdk_event_nvmf.so 00:05:59.753 CC module/event/subsystems/iscsi/iscsi.o 00:05:59.753 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:59.753 LIB libspdk_event_vhost_scsi.a 00:05:59.753 SO libspdk_event_vhost_scsi.so.3.0 00:06:00.017 LIB libspdk_event_iscsi.a 00:06:00.017 SYMLINK libspdk_event_vhost_scsi.so 00:06:00.017 SO libspdk_event_iscsi.so.6.0 00:06:00.017 SYMLINK libspdk_event_iscsi.so 00:06:00.279 SO libspdk.so.6.0 00:06:00.279 SYMLINK libspdk.so 00:06:00.538 CXX app/trace/trace.o 00:06:00.538 CC app/trace_record/trace_record.o 00:06:00.538 TEST_HEADER include/spdk/accel.h 00:06:00.538 TEST_HEADER include/spdk/accel_module.h 00:06:00.538 TEST_HEADER include/spdk/assert.h 00:06:00.538 TEST_HEADER include/spdk/barrier.h 00:06:00.538 TEST_HEADER include/spdk/base64.h 00:06:00.538 TEST_HEADER include/spdk/bdev.h 00:06:00.538 TEST_HEADER include/spdk/bdev_module.h 00:06:00.538 TEST_HEADER include/spdk/bdev_zone.h 00:06:00.538 TEST_HEADER include/spdk/bit_array.h 00:06:00.538 TEST_HEADER include/spdk/bit_pool.h 00:06:00.538 
TEST_HEADER include/spdk/blob_bdev.h 00:06:00.538 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:00.538 TEST_HEADER include/spdk/blobfs.h 00:06:00.538 TEST_HEADER include/spdk/blob.h 00:06:00.538 TEST_HEADER include/spdk/conf.h 00:06:00.538 TEST_HEADER include/spdk/config.h 00:06:00.538 TEST_HEADER include/spdk/cpuset.h 00:06:00.538 TEST_HEADER include/spdk/crc16.h 00:06:00.538 TEST_HEADER include/spdk/crc32.h 00:06:00.538 CC app/iscsi_tgt/iscsi_tgt.o 00:06:00.538 TEST_HEADER include/spdk/crc64.h 00:06:00.538 TEST_HEADER include/spdk/dif.h 00:06:00.538 TEST_HEADER include/spdk/dma.h 00:06:00.538 CC app/nvmf_tgt/nvmf_main.o 00:06:00.538 TEST_HEADER include/spdk/endian.h 00:06:00.538 TEST_HEADER include/spdk/env_dpdk.h 00:06:00.538 TEST_HEADER include/spdk/env.h 00:06:00.538 TEST_HEADER include/spdk/event.h 00:06:00.538 TEST_HEADER include/spdk/fd_group.h 00:06:00.538 TEST_HEADER include/spdk/fd.h 00:06:00.538 TEST_HEADER include/spdk/file.h 00:06:00.538 TEST_HEADER include/spdk/fsdev.h 00:06:00.538 TEST_HEADER include/spdk/fsdev_module.h 00:06:00.538 CC app/spdk_tgt/spdk_tgt.o 00:06:00.538 TEST_HEADER include/spdk/ftl.h 00:06:00.538 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:00.538 TEST_HEADER include/spdk/gpt_spec.h 00:06:00.538 TEST_HEADER include/spdk/hexlify.h 00:06:00.538 TEST_HEADER include/spdk/histogram_data.h 00:06:00.538 TEST_HEADER include/spdk/idxd.h 00:06:00.538 TEST_HEADER include/spdk/idxd_spec.h 00:06:00.538 TEST_HEADER include/spdk/init.h 00:06:00.538 CC test/thread/poller_perf/poller_perf.o 00:06:00.538 TEST_HEADER include/spdk/ioat.h 00:06:00.538 TEST_HEADER include/spdk/ioat_spec.h 00:06:00.538 CC examples/util/zipf/zipf.o 00:06:00.538 TEST_HEADER include/spdk/iscsi_spec.h 00:06:00.796 TEST_HEADER include/spdk/json.h 00:06:00.796 TEST_HEADER include/spdk/jsonrpc.h 00:06:00.796 TEST_HEADER include/spdk/keyring.h 00:06:00.796 TEST_HEADER include/spdk/keyring_module.h 00:06:00.796 CC test/app/bdev_svc/bdev_svc.o 00:06:00.796 TEST_HEADER include/spdk/likely.h 00:06:00.796 TEST_HEADER include/spdk/log.h 00:06:00.796 TEST_HEADER include/spdk/lvol.h 00:06:00.796 CC test/dma/test_dma/test_dma.o 00:06:00.796 TEST_HEADER include/spdk/md5.h 00:06:00.796 TEST_HEADER include/spdk/memory.h 00:06:00.796 TEST_HEADER include/spdk/mmio.h 00:06:00.796 TEST_HEADER include/spdk/nbd.h 00:06:00.796 TEST_HEADER include/spdk/net.h 00:06:00.796 TEST_HEADER include/spdk/notify.h 00:06:00.796 TEST_HEADER include/spdk/nvme.h 00:06:00.796 TEST_HEADER include/spdk/nvme_intel.h 00:06:00.796 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:00.796 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:00.796 TEST_HEADER include/spdk/nvme_spec.h 00:06:00.796 TEST_HEADER include/spdk/nvme_zns.h 00:06:00.796 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:00.796 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:00.796 TEST_HEADER include/spdk/nvmf.h 00:06:00.796 TEST_HEADER include/spdk/nvmf_spec.h 00:06:00.796 TEST_HEADER include/spdk/nvmf_transport.h 00:06:00.796 TEST_HEADER include/spdk/opal.h 00:06:00.796 TEST_HEADER include/spdk/opal_spec.h 00:06:00.796 TEST_HEADER include/spdk/pci_ids.h 00:06:00.796 TEST_HEADER include/spdk/pipe.h 00:06:00.796 TEST_HEADER include/spdk/queue.h 00:06:00.796 TEST_HEADER include/spdk/reduce.h 00:06:00.796 TEST_HEADER include/spdk/rpc.h 00:06:00.796 TEST_HEADER include/spdk/scheduler.h 00:06:00.796 TEST_HEADER include/spdk/scsi.h 00:06:00.797 TEST_HEADER include/spdk/scsi_spec.h 00:06:00.797 TEST_HEADER include/spdk/sock.h 00:06:00.797 TEST_HEADER include/spdk/stdinc.h 00:06:00.797 
TEST_HEADER include/spdk/string.h 00:06:00.797 TEST_HEADER include/spdk/thread.h 00:06:00.797 TEST_HEADER include/spdk/trace.h 00:06:01.055 TEST_HEADER include/spdk/trace_parser.h 00:06:01.055 TEST_HEADER include/spdk/tree.h 00:06:01.055 TEST_HEADER include/spdk/ublk.h 00:06:01.055 TEST_HEADER include/spdk/util.h 00:06:01.055 TEST_HEADER include/spdk/uuid.h 00:06:01.055 TEST_HEADER include/spdk/version.h 00:06:01.055 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:01.055 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:01.055 TEST_HEADER include/spdk/vhost.h 00:06:01.055 TEST_HEADER include/spdk/vmd.h 00:06:01.055 TEST_HEADER include/spdk/xor.h 00:06:01.055 TEST_HEADER include/spdk/zipf.h 00:06:01.056 LINK poller_perf 00:06:01.056 CXX test/cpp_headers/accel.o 00:06:01.056 LINK zipf 00:06:01.056 LINK nvmf_tgt 00:06:01.056 LINK iscsi_tgt 00:06:01.056 LINK spdk_trace_record 00:06:01.056 LINK bdev_svc 00:06:01.056 LINK spdk_tgt 00:06:01.314 LINK spdk_trace 00:06:01.314 CXX test/cpp_headers/accel_module.o 00:06:01.573 CC app/spdk_lspci/spdk_lspci.o 00:06:01.831 CC examples/ioat/perf/perf.o 00:06:01.831 CC examples/vmd/lsvmd/lsvmd.o 00:06:01.831 CC examples/vmd/led/led.o 00:06:01.831 CC test/env/mem_callbacks/mem_callbacks.o 00:06:01.831 CXX test/cpp_headers/assert.o 00:06:01.831 CC test/event/event_perf/event_perf.o 00:06:01.831 LINK test_dma 00:06:01.831 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:01.831 CC test/rpc_client/rpc_client_test.o 00:06:02.090 LINK spdk_lspci 00:06:02.090 LINK lsvmd 00:06:02.090 LINK led 00:06:02.090 LINK event_perf 00:06:02.090 CXX test/cpp_headers/barrier.o 00:06:02.348 LINK ioat_perf 00:06:02.348 LINK rpc_client_test 00:06:02.348 CC app/spdk_nvme_perf/perf.o 00:06:02.607 CC examples/ioat/verify/verify.o 00:06:02.607 CXX test/cpp_headers/base64.o 00:06:02.607 CXX test/cpp_headers/bdev.o 00:06:02.607 CC test/event/reactor/reactor.o 00:06:02.607 CC test/app/histogram_perf/histogram_perf.o 00:06:02.607 CC examples/idxd/perf/perf.o 00:06:02.866 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:02.866 LINK nvme_fuzz 00:06:02.866 LINK histogram_perf 00:06:02.866 LINK reactor 00:06:02.866 LINK mem_callbacks 00:06:02.866 CXX test/cpp_headers/bdev_module.o 00:06:03.125 LINK verify 00:06:03.125 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:03.125 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:03.125 CXX test/cpp_headers/bdev_zone.o 00:06:03.392 CC test/event/reactor_perf/reactor_perf.o 00:06:03.392 CXX test/cpp_headers/bit_array.o 00:06:03.392 CC test/env/vtophys/vtophys.o 00:06:03.392 LINK idxd_perf 00:06:03.393 CC test/event/app_repeat/app_repeat.o 00:06:03.652 LINK reactor_perf 00:06:03.652 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:03.652 CXX test/cpp_headers/bit_pool.o 00:06:03.652 LINK vtophys 00:06:03.652 LINK app_repeat 00:06:03.910 LINK interrupt_tgt 00:06:03.910 CXX test/cpp_headers/blob_bdev.o 00:06:03.910 CC test/accel/dif/dif.o 00:06:03.910 CC test/event/scheduler/scheduler.o 00:06:04.169 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:04.169 CC test/blobfs/mkfs/mkfs.o 00:06:04.169 CC test/app/jsoncat/jsoncat.o 00:06:04.169 LINK vhost_fuzz 00:06:04.169 CXX test/cpp_headers/blobfs_bdev.o 00:06:04.427 LINK mkfs 00:06:04.427 LINK jsoncat 00:06:04.427 CXX test/cpp_headers/blobfs.o 00:06:04.427 LINK env_dpdk_post_init 00:06:04.427 LINK spdk_nvme_perf 00:06:04.427 LINK scheduler 00:06:04.686 CC examples/thread/thread/thread_ex.o 00:06:04.686 CXX test/cpp_headers/blob.o 00:06:04.945 CC app/spdk_nvme_identify/identify.o 00:06:04.945 CC 
examples/sock/hello_world/hello_sock.o 00:06:04.945 CC test/env/memory/memory_ut.o 00:06:04.945 CXX test/cpp_headers/conf.o 00:06:05.205 CC test/nvme/aer/aer.o 00:06:05.205 LINK thread 00:06:05.205 CC test/lvol/esnap/esnap.o 00:06:05.205 CC test/nvme/reset/reset.o 00:06:05.205 CXX test/cpp_headers/config.o 00:06:05.205 CXX test/cpp_headers/cpuset.o 00:06:05.464 LINK hello_sock 00:06:05.464 LINK dif 00:06:05.724 CXX test/cpp_headers/crc16.o 00:06:05.724 CXX test/cpp_headers/crc32.o 00:06:05.724 CC test/app/stub/stub.o 00:06:05.724 LINK aer 00:06:05.724 LINK reset 00:06:05.724 CC examples/accel/perf/accel_perf.o 00:06:05.982 LINK stub 00:06:05.982 CXX test/cpp_headers/crc64.o 00:06:05.982 CC test/nvme/sgl/sgl.o 00:06:06.240 CC test/nvme/e2edp/nvme_dp.o 00:06:06.240 LINK iscsi_fuzz 00:06:06.240 CC test/bdev/bdevio/bdevio.o 00:06:06.240 CXX test/cpp_headers/dif.o 00:06:06.499 CC test/nvme/overhead/overhead.o 00:06:06.499 LINK sgl 00:06:06.499 CXX test/cpp_headers/dma.o 00:06:06.759 LINK memory_ut 00:06:06.759 LINK nvme_dp 00:06:06.759 CXX test/cpp_headers/endian.o 00:06:06.759 LINK spdk_nvme_identify 00:06:06.759 LINK overhead 00:06:06.759 LINK accel_perf 00:06:07.019 LINK bdevio 00:06:07.019 CC examples/nvme/hello_world/hello_world.o 00:06:07.019 CXX test/cpp_headers/env_dpdk.o 00:06:07.019 CC examples/blob/hello_world/hello_blob.o 00:06:07.019 CC examples/nvme/reconnect/reconnect.o 00:06:07.278 CC test/env/pci/pci_ut.o 00:06:07.278 CC app/spdk_nvme_discover/discovery_aer.o 00:06:07.278 CC test/nvme/err_injection/err_injection.o 00:06:07.278 CC test/nvme/startup/startup.o 00:06:07.278 CXX test/cpp_headers/env.o 00:06:07.278 LINK hello_blob 00:06:07.536 LINK hello_world 00:06:07.536 LINK startup 00:06:07.536 LINK err_injection 00:06:07.536 LINK spdk_nvme_discover 00:06:07.536 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:07.794 CXX test/cpp_headers/event.o 00:06:07.794 LINK reconnect 00:06:08.052 CC examples/blob/cli/blobcli.o 00:06:08.052 CXX test/cpp_headers/fd_group.o 00:06:08.052 CC test/nvme/reserve/reserve.o 00:06:08.052 CC examples/bdev/hello_world/hello_bdev.o 00:06:08.052 CC examples/bdev/bdevperf/bdevperf.o 00:06:08.052 CC app/spdk_top/spdk_top.o 00:06:08.052 LINK pci_ut 00:06:08.311 LINK hello_fsdev 00:06:08.311 CXX test/cpp_headers/fd.o 00:06:08.311 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:08.569 LINK reserve 00:06:08.569 LINK hello_bdev 00:06:08.569 CXX test/cpp_headers/file.o 00:06:08.828 CC examples/nvme/arbitration/arbitration.o 00:06:08.828 CC test/nvme/simple_copy/simple_copy.o 00:06:08.828 CC examples/nvme/hotplug/hotplug.o 00:06:09.087 CXX test/cpp_headers/fsdev.o 00:06:09.087 CXX test/cpp_headers/fsdev_module.o 00:06:09.087 LINK blobcli 00:06:09.346 LINK simple_copy 00:06:09.346 LINK hotplug 00:06:09.605 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:09.605 CXX test/cpp_headers/ftl.o 00:06:09.605 CXX test/cpp_headers/fuse_dispatcher.o 00:06:09.605 LINK arbitration 00:06:09.605 LINK nvme_manage 00:06:09.864 CXX test/cpp_headers/gpt_spec.o 00:06:09.864 LINK bdevperf 00:06:09.864 CC test/nvme/connect_stress/connect_stress.o 00:06:09.864 LINK cmb_copy 00:06:09.864 CXX test/cpp_headers/hexlify.o 00:06:10.122 CC examples/nvme/abort/abort.o 00:06:10.122 CC test/nvme/boot_partition/boot_partition.o 00:06:10.122 CC app/vhost/vhost.o 00:06:10.122 CXX test/cpp_headers/histogram_data.o 00:06:10.122 CXX test/cpp_headers/idxd.o 00:06:10.380 LINK spdk_top 00:06:10.380 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:10.380 LINK connect_stress 00:06:10.380 CC 
test/nvme/compliance/nvme_compliance.o 00:06:10.380 LINK boot_partition 00:06:10.639 LINK vhost 00:06:10.639 CXX test/cpp_headers/idxd_spec.o 00:06:10.639 CXX test/cpp_headers/init.o 00:06:10.639 CXX test/cpp_headers/ioat.o 00:06:10.639 LINK pmr_persistence 00:06:10.897 CC app/spdk_dd/spdk_dd.o 00:06:10.897 LINK abort 00:06:10.897 CXX test/cpp_headers/ioat_spec.o 00:06:10.897 CXX test/cpp_headers/iscsi_spec.o 00:06:11.155 CC test/nvme/fused_ordering/fused_ordering.o 00:06:11.155 CXX test/cpp_headers/json.o 00:06:11.155 LINK nvme_compliance 00:06:11.155 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:11.471 CC app/fio/nvme/fio_plugin.o 00:06:11.471 CXX test/cpp_headers/jsonrpc.o 00:06:11.471 LINK fused_ordering 00:06:11.471 CC test/nvme/cuse/cuse.o 00:06:11.471 CC test/nvme/fdp/fdp.o 00:06:11.471 CXX test/cpp_headers/keyring.o 00:06:11.729 LINK doorbell_aers 00:06:11.729 LINK spdk_dd 00:06:11.729 CC examples/nvmf/nvmf/nvmf.o 00:06:11.729 CXX test/cpp_headers/keyring_module.o 00:06:11.729 CXX test/cpp_headers/likely.o 00:06:11.987 CXX test/cpp_headers/log.o 00:06:11.987 LINK nvmf 00:06:11.987 CXX test/cpp_headers/lvol.o 00:06:11.987 CXX test/cpp_headers/md5.o 00:06:11.987 CC app/fio/bdev/fio_plugin.o 00:06:12.245 CXX test/cpp_headers/memory.o 00:06:12.245 LINK fdp 00:06:12.245 CXX test/cpp_headers/mmio.o 00:06:12.245 CXX test/cpp_headers/nbd.o 00:06:12.245 CXX test/cpp_headers/net.o 00:06:12.245 CXX test/cpp_headers/notify.o 00:06:12.504 CXX test/cpp_headers/nvme.o 00:06:12.504 CXX test/cpp_headers/nvme_intel.o 00:06:12.504 CXX test/cpp_headers/nvme_ocssd.o 00:06:12.504 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:12.762 LINK spdk_nvme 00:06:12.762 CXX test/cpp_headers/nvme_spec.o 00:06:12.762 CXX test/cpp_headers/nvme_zns.o 00:06:12.762 CXX test/cpp_headers/nvmf_cmd.o 00:06:12.762 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:12.762 CXX test/cpp_headers/nvmf.o 00:06:12.762 CXX test/cpp_headers/nvmf_spec.o 00:06:13.020 CXX test/cpp_headers/nvmf_transport.o 00:06:13.020 LINK spdk_bdev 00:06:13.020 CXX test/cpp_headers/opal.o 00:06:13.020 CXX test/cpp_headers/opal_spec.o 00:06:13.278 CXX test/cpp_headers/pci_ids.o 00:06:13.278 CXX test/cpp_headers/pipe.o 00:06:13.278 CXX test/cpp_headers/queue.o 00:06:13.278 CXX test/cpp_headers/reduce.o 00:06:13.278 CXX test/cpp_headers/rpc.o 00:06:13.278 CXX test/cpp_headers/scheduler.o 00:06:13.278 CXX test/cpp_headers/scsi.o 00:06:13.537 CXX test/cpp_headers/scsi_spec.o 00:06:13.537 CXX test/cpp_headers/sock.o 00:06:13.537 CXX test/cpp_headers/stdinc.o 00:06:13.537 CXX test/cpp_headers/string.o 00:06:13.537 CXX test/cpp_headers/thread.o 00:06:13.537 CXX test/cpp_headers/trace.o 00:06:13.537 CXX test/cpp_headers/trace_parser.o 00:06:13.795 CXX test/cpp_headers/tree.o 00:06:13.795 CXX test/cpp_headers/ublk.o 00:06:13.795 LINK cuse 00:06:13.795 CXX test/cpp_headers/util.o 00:06:13.795 CXX test/cpp_headers/uuid.o 00:06:13.795 CXX test/cpp_headers/version.o 00:06:13.795 CXX test/cpp_headers/vfio_user_pci.o 00:06:13.795 CXX test/cpp_headers/vfio_user_spec.o 00:06:13.796 CXX test/cpp_headers/vhost.o 00:06:13.796 CXX test/cpp_headers/vmd.o 00:06:14.054 CXX test/cpp_headers/xor.o 00:06:14.054 CXX test/cpp_headers/zipf.o 00:06:15.958 LINK esnap 00:06:16.525 00:06:16.525 real 2m22.738s 00:06:16.525 user 14m5.505s 00:06:16.525 sys 2m14.137s 00:06:16.525 08:24:55 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:16.525 08:24:55 make -- common/autotest_common.sh@10 -- $ set +x 00:06:16.525 ************************************ 00:06:16.525 END TEST make 
00:06:16.525 ************************************ 00:06:16.525 08:24:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:16.525 08:24:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:16.525 08:24:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:16.525 08:24:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:16.525 08:24:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:16.525 08:24:55 -- pm/common@44 -- $ pid=5331 00:06:16.525 08:24:55 -- pm/common@50 -- $ kill -TERM 5331 00:06:16.525 08:24:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:16.525 08:24:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:16.525 08:24:55 -- pm/common@44 -- $ pid=5332 00:06:16.525 08:24:55 -- pm/common@50 -- $ kill -TERM 5332 00:06:16.525 08:24:55 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:16.525 08:24:55 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:16.525 08:24:55 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.525 08:24:55 -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.525 08:24:55 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.783 08:24:55 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.783 08:24:55 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.783 08:24:55 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.783 08:24:55 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.783 08:24:55 -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.783 08:24:55 -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.783 08:24:55 -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.783 08:24:55 -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.783 08:24:55 -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.783 08:24:55 -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.783 08:24:55 -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.783 08:24:55 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.783 08:24:55 -- scripts/common.sh@344 -- # case "$op" in 00:06:16.783 08:24:55 -- scripts/common.sh@345 -- # : 1 00:06:16.783 08:24:55 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.783 08:24:55 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.783 08:24:55 -- scripts/common.sh@365 -- # decimal 1 00:06:16.783 08:24:55 -- scripts/common.sh@353 -- # local d=1 00:06:16.783 08:24:55 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.783 08:24:55 -- scripts/common.sh@355 -- # echo 1 00:06:16.783 08:24:55 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.783 08:24:55 -- scripts/common.sh@366 -- # decimal 2 00:06:16.783 08:24:55 -- scripts/common.sh@353 -- # local d=2 00:06:16.783 08:24:55 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.783 08:24:55 -- scripts/common.sh@355 -- # echo 2 00:06:16.783 08:24:55 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.783 08:24:55 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.783 08:24:55 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.783 08:24:55 -- scripts/common.sh@368 -- # return 0 00:06:16.783 08:24:55 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.783 08:24:55 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.783 --rc genhtml_branch_coverage=1 00:06:16.783 --rc genhtml_function_coverage=1 00:06:16.783 --rc genhtml_legend=1 00:06:16.783 --rc geninfo_all_blocks=1 00:06:16.783 --rc geninfo_unexecuted_blocks=1 00:06:16.783 00:06:16.783 ' 00:06:16.783 08:24:55 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.783 --rc genhtml_branch_coverage=1 00:06:16.783 --rc genhtml_function_coverage=1 00:06:16.783 --rc genhtml_legend=1 00:06:16.783 --rc geninfo_all_blocks=1 00:06:16.783 --rc geninfo_unexecuted_blocks=1 00:06:16.783 00:06:16.783 ' 00:06:16.783 08:24:55 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.783 --rc genhtml_branch_coverage=1 00:06:16.783 --rc genhtml_function_coverage=1 00:06:16.783 --rc genhtml_legend=1 00:06:16.783 --rc geninfo_all_blocks=1 00:06:16.783 --rc geninfo_unexecuted_blocks=1 00:06:16.783 00:06:16.783 ' 00:06:16.783 08:24:55 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.783 --rc genhtml_branch_coverage=1 00:06:16.783 --rc genhtml_function_coverage=1 00:06:16.783 --rc genhtml_legend=1 00:06:16.783 --rc geninfo_all_blocks=1 00:06:16.783 --rc geninfo_unexecuted_blocks=1 00:06:16.783 00:06:16.783 ' 00:06:16.783 08:24:55 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:16.783 08:24:55 -- nvmf/common.sh@7 -- # uname -s 00:06:16.783 08:24:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.783 08:24:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.783 08:24:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.783 08:24:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.783 08:24:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.783 08:24:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.783 08:24:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.783 08:24:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.783 08:24:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.783 08:24:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.783 08:24:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b92d3187-dd6c-45ef-abdc-6bb81c9ac50e 00:06:16.783 
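The cmp_versions trace above is autotest picking an lcov flag spelling: the installed lcov version (1.15, pulled out with awk '{print $NF}') is split on '.', '-' and ':' into components, each component is sanity-checked by the 'decimal' helper, and the fields are compared one by one against 2. Since 1 < 2 the comparison succeeds and the old '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' option spelling is selected. A minimal self-contained sketch of that comparison (the function name and the reduced error handling are simplifications, not the literal scripts/common.sh code):

    # version_lt A B -> exit 0 when version A sorts before version B
    version_lt() {
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < max; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi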
08:24:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=b92d3187-dd6c-45ef-abdc-6bb81c9ac50e 00:06:16.783 08:24:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.783 08:24:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.783 08:24:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:16.783 08:24:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.783 08:24:55 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:16.783 08:24:55 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.783 08:24:55 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.783 08:24:55 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.783 08:24:55 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.783 08:24:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.783 08:24:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.783 08:24:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.783 08:24:55 -- paths/export.sh@5 -- # export PATH 00:06:16.784 08:24:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.784 08:24:55 -- nvmf/common.sh@51 -- # : 0 00:06:16.784 08:24:55 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:16.784 08:24:55 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:16.784 08:24:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.784 08:24:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.784 08:24:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.784 08:24:55 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:16.784 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:16.784 08:24:55 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:16.784 08:24:55 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:16.784 08:24:55 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:16.784 08:24:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:16.784 08:24:55 -- spdk/autotest.sh@32 -- # uname -s 00:06:16.784 08:24:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:16.784 08:24:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:16.784 08:24:55 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:16.784 08:24:55 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:16.784 08:24:55 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:16.784 08:24:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:16.784 08:24:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:16.784 08:24:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:16.784 08:24:55 -- spdk/autotest.sh@48 -- # udevadm_pid=55301 00:06:16.784 08:24:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:16.784 08:24:55 -- pm/common@17 -- # local monitor 00:06:16.784 08:24:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:16.784 08:24:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:16.784 08:24:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:16.784 08:24:55 -- pm/common@25 -- # sleep 1 00:06:16.784 08:24:55 -- pm/common@21 -- # date +%s 00:06:16.784 08:24:55 -- pm/common@21 -- # date +%s 00:06:16.784 08:24:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732004695 00:06:16.784 08:24:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732004695 00:06:16.784 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732004695_collect-vmstat.pm.log 00:06:16.784 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732004695_collect-cpu-load.pm.log 00:06:17.717 08:24:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:17.717 08:24:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:17.717 08:24:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.717 08:24:56 -- common/autotest_common.sh@10 -- # set +x 00:06:17.717 08:24:56 -- spdk/autotest.sh@59 -- # create_test_list 00:06:17.717 08:24:56 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:17.717 08:24:56 -- common/autotest_common.sh@10 -- # set +x 00:06:17.717 08:24:56 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:17.717 08:24:56 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:17.717 08:24:56 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:17.717 08:24:56 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:17.717 08:24:56 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:17.717 08:24:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:17.717 08:24:56 -- common/autotest_common.sh@1457 -- # uname 00:06:17.717 08:24:56 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:17.717 08:24:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:17.717 08:24:56 -- common/autotest_common.sh@1477 -- # uname 00:06:17.717 08:24:56 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:17.717 08:24:56 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:17.717 08:24:56 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:17.974 lcov: LCOV version 1.15 00:06:17.974 08:24:57 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:36.051 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:36.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:54.140 08:25:31 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:54.140 08:25:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.140 08:25:31 -- common/autotest_common.sh@10 -- # set +x 00:06:54.140 08:25:31 -- spdk/autotest.sh@78 -- # rm -f 00:06:54.140 08:25:31 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:54.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:54.140 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:54.140 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:54.140 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:06:54.140 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:06:54.140 08:25:32 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:54.140 08:25:32 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:54.140 08:25:32 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:54.140 08:25:32 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:54.140 08:25:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:54.140 08:25:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:54.140 08:25:32 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:54.140 08:25:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:54.140 08:25:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:54.140 08:25:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:54.140 08:25:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:54.140 08:25:32 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:54.140 08:25:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:54.140 08:25:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:54.140 08:25:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:54.140 08:25:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:54.140 08:25:32 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:54.140 08:25:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:54.140 08:25:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:54.140 08:25:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:54.140 08:25:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:54.140 08:25:32 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:54.140 08:25:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:54.140 08:25:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:54.140 08:25:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:54.140 08:25:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:54.140 08:25:32 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:54.140 08:25:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:54.140 08:25:32 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:54.140 08:25:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:54.140 08:25:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:54.140 08:25:32 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:54.140 08:25:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:54.140 08:25:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:54.140 08:25:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:54.140 08:25:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:54.140 08:25:32 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:54.140 08:25:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:54.140 08:25:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:54.140 08:25:32 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:54.140 08:25:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:54.140 08:25:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:54.140 08:25:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:54.140 08:25:32 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:54.140 08:25:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:54.140 No valid GPT data, bailing 00:06:54.140 08:25:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:54.140 08:25:32 -- scripts/common.sh@394 -- # pt= 00:06:54.140 08:25:32 -- scripts/common.sh@395 -- # return 1 00:06:54.140 08:25:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:54.140 1+0 records in 00:06:54.140 1+0 records out 00:06:54.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143742 s, 72.9 MB/s 00:06:54.140 08:25:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:54.140 08:25:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:54.140 08:25:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:54.140 08:25:33 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:54.140 08:25:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:54.140 No valid GPT data, bailing 00:06:54.140 08:25:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:54.140 08:25:33 -- scripts/common.sh@394 -- # pt= 00:06:54.140 08:25:33 -- scripts/common.sh@395 -- # return 1 00:06:54.140 08:25:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:54.140 1+0 records in 00:06:54.140 1+0 records out 00:06:54.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00581572 s, 180 MB/s 00:06:54.140 08:25:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:54.140 08:25:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:54.140 08:25:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:06:54.140 08:25:33 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:06:54.140 08:25:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:06:54.140 No valid GPT data, bailing 00:06:54.140 08:25:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:06:54.140 08:25:33 -- scripts/common.sh@394 -- # pt= 00:06:54.140 08:25:33 -- scripts/common.sh@395 -- # return 1 00:06:54.140 08:25:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:06:54.140 1+0 
records in 00:06:54.140 1+0 records out 00:06:54.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467754 s, 224 MB/s 00:06:54.140 08:25:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:54.140 08:25:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:54.140 08:25:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:06:54.140 08:25:33 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:06:54.140 08:25:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:06:54.140 No valid GPT data, bailing 00:06:54.140 08:25:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:06:54.140 08:25:33 -- scripts/common.sh@394 -- # pt= 00:06:54.140 08:25:33 -- scripts/common.sh@395 -- # return 1 00:06:54.140 08:25:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:06:54.140 1+0 records in 00:06:54.140 1+0 records out 00:06:54.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435422 s, 241 MB/s 00:06:54.140 08:25:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:54.140 08:25:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:54.140 08:25:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:06:54.140 08:25:33 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:06:54.140 08:25:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:06:54.140 No valid GPT data, bailing 00:06:54.140 08:25:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:06:54.140 08:25:33 -- scripts/common.sh@394 -- # pt= 00:06:54.140 08:25:33 -- scripts/common.sh@395 -- # return 1 00:06:54.140 08:25:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:06:54.140 1+0 records in 00:06:54.140 1+0 records out 00:06:54.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444616 s, 236 MB/s 00:06:54.140 08:25:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:54.140 08:25:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:54.140 08:25:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:06:54.140 08:25:33 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:06:54.140 08:25:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:06:54.140 No valid GPT data, bailing 00:06:54.140 08:25:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:06:54.140 08:25:33 -- scripts/common.sh@394 -- # pt= 00:06:54.140 08:25:33 -- scripts/common.sh@395 -- # return 1 00:06:54.140 08:25:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:06:54.140 1+0 records in 00:06:54.140 1+0 records out 00:06:54.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00492407 s, 213 MB/s 00:06:54.140 08:25:33 -- spdk/autotest.sh@105 -- # sync 00:06:54.399 08:25:33 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:54.399 08:25:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:54.399 08:25:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:56.305 08:25:35 -- spdk/autotest.sh@111 -- # uname -s 00:06:56.305 08:25:35 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:56.305 08:25:35 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:56.305 08:25:35 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:56.870 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:57.437 
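Each 'No valid GPT data, bailing' round above is the same pre-cleanup guard applied to the next namespace: a device is zeroed only after both spdk-gpt.py and blkid fail to find a partition table on it, so a disk that is actually in use is never scrubbed. Condensed, the loop behaves roughly like this (the exit-status convention for spdk-gpt.py is inferred from the trace, not taken from its source):

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do                      # whole namespaces, skip partitions
        if /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$dev"; then
            continue                                      # GPT found: device is in use
        fi
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        [[ -n $pt ]] && continue                          # some other partition table present
        dd if=/dev/zero of="$dev" bs=1M count=1           # scrub the first MiB before testing
    done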
Hugepages 00:06:57.437 node hugesize free / total 00:06:57.437 node0 1048576kB 0 / 0 00:06:57.437 node0 2048kB 0 / 0 00:06:57.437 00:06:57.437 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:57.437 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:57.437 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:57.437 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:57.696 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:06:57.696 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:06:57.696 08:25:36 -- spdk/autotest.sh@117 -- # uname -s 00:06:57.696 08:25:36 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:57.696 08:25:36 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:57.696 08:25:36 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:58.265 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:58.833 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:58.833 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:58.833 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:58.833 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:58.833 08:25:38 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:59.769 08:25:39 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:59.769 08:25:39 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:59.769 08:25:39 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:59.769 08:25:39 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:59.769 08:25:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:59.769 08:25:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:59.769 08:25:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:59.769 08:25:39 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:59.769 08:25:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:00.027 08:25:39 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:00.027 08:25:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:00.027 08:25:39 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:00.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:00.544 Waiting for block devices as requested 00:07:00.544 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:00.544 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:00.544 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:00.802 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:06.074 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:06.074 08:25:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:06.074 08:25:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:06.074 08:25:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:06.074 08:25:45 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:07:06.074 08:25:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:06.074 08:25:45 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:06.074 08:25:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:06.074 08:25:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:07:06.074 08:25:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:07:06.074 08:25:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:07:06.074 08:25:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:07:06.074 08:25:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:06.074 08:25:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:06.074 08:25:45 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:06.074 08:25:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:06.074 08:25:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:06.074 08:25:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:06.074 08:25:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:06.074 08:25:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:06.074 08:25:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:06.074 08:25:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:06.074 08:25:45 -- common/autotest_common.sh@1543 -- # continue 00:07:06.074 08:25:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:06.074 08:25:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:06.074 08:25:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:06.074 08:25:45 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:07:06.074 08:25:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:06.074 08:25:45 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:06.075 08:25:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:06.075 08:25:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:06.075 08:25:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:06.075 08:25:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:06.075 08:25:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:06.075 08:25:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:06.075 08:25:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:06.075 08:25:45 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:06.075 08:25:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:06.075 08:25:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:06.075 08:25:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:06.075 08:25:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:06.075 08:25:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:06.075 08:25:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:06.075 08:25:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:06.075 08:25:45 -- common/autotest_common.sh@1543 -- # continue 00:07:06.075 08:25:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:06.075 08:25:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:07:06.075 08:25:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:06.075 08:25:45 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:07:06.075 08:25:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:06.075 08:25:45 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:07:06.075 08:25:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:06.075 08:25:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:07:06.075 08:25:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:07:06.075 08:25:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:07:06.075 08:25:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:07:06.075 08:25:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:06.075 08:25:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:06.075 08:25:45 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:06.075 08:25:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:06.075 08:25:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:06.075 08:25:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:06.075 08:25:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:07:06.075 08:25:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:06.075 08:25:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:06.075 08:25:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:06.075 08:25:45 -- common/autotest_common.sh@1543 -- # continue 00:07:06.075 08:25:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:06.075 08:25:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:07:06.075 08:25:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:06.075 08:25:45 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:07:06.075 08:25:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:06.075 08:25:45 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:07:06.075 08:25:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:06.075 08:25:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:07:06.075 08:25:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:07:06.075 08:25:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:07:06.075 08:25:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:07:06.075 08:25:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:06.075 08:25:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:06.075 08:25:45 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:06.075 08:25:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:06.075 08:25:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:06.075 08:25:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:07:06.075 08:25:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:06.075 08:25:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:06.075 08:25:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:06.075 08:25:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
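The readlink/grep/basename sequence repeated for each controller above maps a PCI address back to its kernel nvme node, and the id-ctrl greps then decide whether any namespace cleanup is needed: bit 3 of OACS (0x8, namespace management) must be set for a revert to even be possible, and an 'unvmcap' of 0 means all capacity is already allocated, so the loop just continues. In outline (the helper name matches the trace; its body here is a simplification):

    get_nvme_ctrlr_from_bdf() {    # e.g. 0000:00:10.0 -> nvme1
        local path
        path=$(readlink -f /sys/class/nvme/nvme* | grep "$1/nvme/nvme") || return 1
        basename "$path"
    }

    ctrlr=/dev/$(get_nvme_ctrlr_from_bdf 0000:00:10.0)
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # 0x12a in this run
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)  # 0 in this run
    if (( oacs & 0x8 )) && (( unvmcap == 0 )); then
        echo "$ctrlr: namespace management supported, nothing to reclaim"
    fi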
00:07:06.075 08:25:45 -- common/autotest_common.sh@1543 -- # continue 00:07:06.075 08:25:45 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:06.075 08:25:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.075 08:25:45 -- common/autotest_common.sh@10 -- # set +x 00:07:06.075 08:25:45 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:06.075 08:25:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.075 08:25:45 -- common/autotest_common.sh@10 -- # set +x 00:07:06.075 08:25:45 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:06.645 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:07.217 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:07.217 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:07.217 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:07.217 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:07.217 08:25:46 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:07.217 08:25:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:07.217 08:25:46 -- common/autotest_common.sh@10 -- # set +x 00:07:07.217 08:25:46 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:07.217 08:25:46 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:07.217 08:25:46 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:07.217 08:25:46 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:07.217 08:25:46 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:07.217 08:25:46 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:07.217 08:25:46 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:07.217 08:25:46 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:07.217 08:25:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:07.217 08:25:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:07.217 08:25:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:07.217 08:25:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:07.217 08:25:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:07.476 08:25:46 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:07.476 08:25:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:07.476 08:25:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:07.476 08:25:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:07.476 08:25:46 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:07.476 08:25:46 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:07.476 08:25:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:07.476 08:25:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:07.476 08:25:46 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:07.476 08:25:46 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:07.476 08:25:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:07.476 08:25:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:07:07.476 08:25:46 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:07.476 08:25:46 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
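The chain of '[[ 0x0010 == \0\x\0\a\5\4 ]]' tests above is opal_revert_cleanup filtering controllers by PCI device ID: only 0x0a54 parts (the Intel device ID the cleanup targets for an OPAL revert) would be collected, and every controller on this QEMU node reports 0x0010, so the list stays empty and the whole step reduces to a no-op. As a sketch (the real get_nvme_bdfs_by_id iterates the bdfs reported by gen_nvme.sh; the hard-coded list here just mirrors this node):

    get_nvme_bdfs_by_id() {    # print the bdfs whose PCI device ID matches $1
        local bdf dev
        for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
            dev=$(cat "/sys/bus/pci/devices/$bdf/device")
            [[ $dev == "$1" ]] && echo "$bdf"
        done
    }

    mapfile -t bdfs < <(get_nvme_bdfs_by_id 0x0a54)
    (( ${#bdfs[@]} )) || echo "no 0x0a54 controllers found, skipping OPAL revert"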
00:07:07.476 08:25:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:07.476 08:25:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:07:07.476 08:25:46 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:07.476 08:25:46 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:07.476 08:25:46 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:07.476 08:25:46 -- common/autotest_common.sh@1572 -- # return 0 00:07:07.476 08:25:46 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:07.476 08:25:46 -- common/autotest_common.sh@1580 -- # return 0 00:07:07.476 08:25:46 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:07.476 08:25:46 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:07.476 08:25:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:07.476 08:25:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:07.476 08:25:46 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:07.476 08:25:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:07.476 08:25:46 -- common/autotest_common.sh@10 -- # set +x 00:07:07.476 08:25:46 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:07.476 08:25:46 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:07.476 08:25:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.476 08:25:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.476 08:25:46 -- common/autotest_common.sh@10 -- # set +x 00:07:07.476 ************************************ 00:07:07.476 START TEST env 00:07:07.476 ************************************ 00:07:07.476 08:25:46 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:07.476 * Looking for test storage... 00:07:07.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:07.476 08:25:46 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.476 08:25:46 env -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.476 08:25:46 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.476 08:25:46 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.476 08:25:46 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.476 08:25:46 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.476 08:25:46 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.476 08:25:46 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.476 08:25:46 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.476 08:25:46 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.476 08:25:46 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.476 08:25:46 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.476 08:25:46 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.477 08:25:46 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.477 08:25:46 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.477 08:25:46 env -- scripts/common.sh@344 -- # case "$op" in 00:07:07.477 08:25:46 env -- scripts/common.sh@345 -- # : 1 00:07:07.477 08:25:46 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.477 08:25:46 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.477 08:25:46 env -- scripts/common.sh@365 -- # decimal 1 00:07:07.477 08:25:46 env -- scripts/common.sh@353 -- # local d=1 00:07:07.477 08:25:46 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.477 08:25:46 env -- scripts/common.sh@355 -- # echo 1 00:07:07.477 08:25:46 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.477 08:25:46 env -- scripts/common.sh@366 -- # decimal 2 00:07:07.477 08:25:46 env -- scripts/common.sh@353 -- # local d=2 00:07:07.477 08:25:46 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.477 08:25:46 env -- scripts/common.sh@355 -- # echo 2 00:07:07.477 08:25:46 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.477 08:25:46 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.477 08:25:46 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.477 08:25:46 env -- scripts/common.sh@368 -- # return 0 00:07:07.477 08:25:46 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.477 08:25:46 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.477 --rc genhtml_branch_coverage=1 00:07:07.477 --rc genhtml_function_coverage=1 00:07:07.477 --rc genhtml_legend=1 00:07:07.477 --rc geninfo_all_blocks=1 00:07:07.477 --rc geninfo_unexecuted_blocks=1 00:07:07.477 00:07:07.477 ' 00:07:07.477 08:25:46 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.477 --rc genhtml_branch_coverage=1 00:07:07.477 --rc genhtml_function_coverage=1 00:07:07.477 --rc genhtml_legend=1 00:07:07.477 --rc geninfo_all_blocks=1 00:07:07.477 --rc geninfo_unexecuted_blocks=1 00:07:07.477 00:07:07.477 ' 00:07:07.477 08:25:46 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.477 --rc genhtml_branch_coverage=1 00:07:07.477 --rc genhtml_function_coverage=1 00:07:07.477 --rc genhtml_legend=1 00:07:07.477 --rc geninfo_all_blocks=1 00:07:07.477 --rc geninfo_unexecuted_blocks=1 00:07:07.477 00:07:07.477 ' 00:07:07.477 08:25:46 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.477 --rc genhtml_branch_coverage=1 00:07:07.477 --rc genhtml_function_coverage=1 00:07:07.477 --rc genhtml_legend=1 00:07:07.477 --rc geninfo_all_blocks=1 00:07:07.477 --rc geninfo_unexecuted_blocks=1 00:07:07.477 00:07:07.477 ' 00:07:07.477 08:25:46 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:07.477 08:25:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.477 08:25:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.477 08:25:46 env -- common/autotest_common.sh@10 -- # set +x 00:07:07.757 ************************************ 00:07:07.757 START TEST env_memory 00:07:07.757 ************************************ 00:07:07.757 08:25:46 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:07.757 00:07:07.757 00:07:07.757 CUnit - A unit testing framework for C - Version 2.1-3 00:07:07.757 http://cunit.sourceforge.net/ 00:07:07.757 00:07:07.757 00:07:07.757 Suite: memory 00:07:07.757 Test: alloc and free memory map ...[2024-11-19 08:25:46.842938] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:07.757 passed 00:07:07.757 Test: mem map translation ...[2024-11-19 08:25:46.903647] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:07.757 [2024-11-19 08:25:46.903731] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:07.757 [2024-11-19 08:25:46.903830] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:07.757 [2024-11-19 08:25:46.903862] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:07.757 passed 00:07:07.757 Test: mem map registration ...[2024-11-19 08:25:46.986086] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:07.757 [2024-11-19 08:25:46.986158] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:07.757 passed 00:07:08.016 Test: mem map adjacent registrations ...passed 00:07:08.016 00:07:08.016 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.016 suites 1 1 n/a 0 0 00:07:08.016 tests 4 4 4 0 0 00:07:08.016 asserts 152 152 152 0 n/a 00:07:08.016 00:07:08.016 Elapsed time = 0.300 seconds 00:07:08.016 00:07:08.016 real 0m0.335s 00:07:08.016 user 0m0.310s 00:07:08.016 sys 0m0.019s 00:07:08.016 08:25:47 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.016 08:25:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:08.016 ************************************ 00:07:08.016 END TEST env_memory 00:07:08.016 ************************************ 00:07:08.016 08:25:47 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:08.016 08:25:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.016 08:25:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.016 08:25:47 env -- common/autotest_common.sh@10 -- # set +x 00:07:08.016 ************************************ 00:07:08.016 START TEST env_vtophys 00:07:08.016 ************************************ 00:07:08.016 08:25:47 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:08.016 EAL: lib.eal log level changed from notice to debug 00:07:08.016 EAL: Detected lcore 0 as core 0 on socket 0 00:07:08.016 EAL: Detected lcore 1 as core 0 on socket 0 00:07:08.016 EAL: Detected lcore 2 as core 0 on socket 0 00:07:08.016 EAL: Detected lcore 3 as core 0 on socket 0 00:07:08.016 EAL: Detected lcore 4 as core 0 on socket 0 00:07:08.016 EAL: Detected lcore 5 as core 0 on socket 0 00:07:08.016 EAL: Detected lcore 6 as core 0 on socket 0 00:07:08.016 EAL: Detected lcore 7 as core 0 on socket 0 00:07:08.016 EAL: Detected lcore 8 as core 0 on socket 0 00:07:08.016 EAL: Detected lcore 9 as core 0 on socket 0 00:07:08.016 EAL: Maximum logical cores by configuration: 128 00:07:08.016 EAL: Detected CPU lcores: 10 00:07:08.016 EAL: Detected NUMA nodes: 1 00:07:08.017 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:08.017 EAL: Detected shared linkage of DPDK 00:07:08.017 EAL: No 
shared files mode enabled, IPC will be disabled 00:07:08.017 EAL: Selected IOVA mode 'PA' 00:07:08.017 EAL: Probing VFIO support... 00:07:08.017 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:08.017 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:08.017 EAL: Ask a virtual area of 0x2e000 bytes 00:07:08.017 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:08.017 EAL: Setting up physically contiguous memory... 00:07:08.017 EAL: Setting maximum number of open files to 524288 00:07:08.017 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:08.017 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:08.017 EAL: Ask a virtual area of 0x61000 bytes 00:07:08.017 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:08.017 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:08.017 EAL: Ask a virtual area of 0x400000000 bytes 00:07:08.017 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:08.017 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:08.017 EAL: Ask a virtual area of 0x61000 bytes 00:07:08.017 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:08.017 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:08.017 EAL: Ask a virtual area of 0x400000000 bytes 00:07:08.017 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:08.017 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:08.017 EAL: Ask a virtual area of 0x61000 bytes 00:07:08.017 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:08.017 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:08.017 EAL: Ask a virtual area of 0x400000000 bytes 00:07:08.017 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:08.017 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:08.017 EAL: Ask a virtual area of 0x61000 bytes 00:07:08.017 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:08.017 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:08.017 EAL: Ask a virtual area of 0x400000000 bytes 00:07:08.017 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:08.017 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:08.017 EAL: Hugepages will be freed exactly as allocated. 00:07:08.017 EAL: No shared files mode enabled, IPC is disabled 00:07:08.017 EAL: No shared files mode enabled, IPC is disabled 00:07:08.275 EAL: TSC frequency is ~2200000 KHz 00:07:08.275 EAL: Main lcore 0 is ready (tid=7fe7b8ed8a40;cpuset=[0]) 00:07:08.275 EAL: Trying to obtain current memory policy. 00:07:08.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:08.275 EAL: Restoring previous memory policy: 0 00:07:08.275 EAL: request: mp_malloc_sync 00:07:08.275 EAL: No shared files mode enabled, IPC is disabled 00:07:08.275 EAL: Heap on socket 0 was expanded by 2MB 00:07:08.275 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:08.275 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:08.275 EAL: Mem event callback 'spdk:(nil)' registered 00:07:08.275 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:07:08.275 00:07:08.275 00:07:08.275 CUnit - A unit testing framework for C - Version 2.1-3 00:07:08.275 http://cunit.sourceforge.net/ 00:07:08.275 00:07:08.275 00:07:08.275 Suite: components_suite 00:07:08.534 Test: vtophys_malloc_test ...passed 00:07:08.534 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:08.534 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:08.534 EAL: Restoring previous memory policy: 4 00:07:08.534 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.534 EAL: request: mp_malloc_sync 00:07:08.534 EAL: No shared files mode enabled, IPC is disabled 00:07:08.534 EAL: Heap on socket 0 was expanded by 4MB 00:07:08.534 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.534 EAL: request: mp_malloc_sync 00:07:08.534 EAL: No shared files mode enabled, IPC is disabled 00:07:08.534 EAL: Heap on socket 0 was shrunk by 4MB 00:07:08.534 EAL: Trying to obtain current memory policy. 00:07:08.534 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:08.534 EAL: Restoring previous memory policy: 4 00:07:08.534 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.534 EAL: request: mp_malloc_sync 00:07:08.534 EAL: No shared files mode enabled, IPC is disabled 00:07:08.534 EAL: Heap on socket 0 was expanded by 6MB 00:07:08.793 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.793 EAL: request: mp_malloc_sync 00:07:08.793 EAL: No shared files mode enabled, IPC is disabled 00:07:08.793 EAL: Heap on socket 0 was shrunk by 6MB 00:07:08.793 EAL: Trying to obtain current memory policy. 00:07:08.793 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:08.793 EAL: Restoring previous memory policy: 4 00:07:08.793 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.793 EAL: request: mp_malloc_sync 00:07:08.793 EAL: No shared files mode enabled, IPC is disabled 00:07:08.793 EAL: Heap on socket 0 was expanded by 10MB 00:07:08.793 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.793 EAL: request: mp_malloc_sync 00:07:08.793 EAL: No shared files mode enabled, IPC is disabled 00:07:08.793 EAL: Heap on socket 0 was shrunk by 10MB 00:07:08.793 EAL: Trying to obtain current memory policy. 00:07:08.793 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:08.793 EAL: Restoring previous memory policy: 4 00:07:08.793 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.793 EAL: request: mp_malloc_sync 00:07:08.793 EAL: No shared files mode enabled, IPC is disabled 00:07:08.793 EAL: Heap on socket 0 was expanded by 18MB 00:07:08.793 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.793 EAL: request: mp_malloc_sync 00:07:08.793 EAL: No shared files mode enabled, IPC is disabled 00:07:08.793 EAL: Heap on socket 0 was shrunk by 18MB 00:07:08.793 EAL: Trying to obtain current memory policy. 00:07:08.793 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:08.793 EAL: Restoring previous memory policy: 4 00:07:08.793 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.793 EAL: request: mp_malloc_sync 00:07:08.793 EAL: No shared files mode enabled, IPC is disabled 00:07:08.793 EAL: Heap on socket 0 was expanded by 34MB 00:07:08.793 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.793 EAL: request: mp_malloc_sync 00:07:08.793 EAL: No shared files mode enabled, IPC is disabled 00:07:08.793 EAL: Heap on socket 0 was shrunk by 34MB 00:07:08.793 EAL: Trying to obtain current memory policy. 
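The expanded-by/shrunk-by pairs in this stretch are the vtophys suite allocating progressively larger DMA-safe buffers and releasing them; each allocation fires the 'spdk:(nil)' mem event callback that grows or shrinks the hugepage heap. A minimal sketch of the path being exercised, assuming only the public spdk_malloc/spdk_vtophys API (the buffer size and alignment here are illustrative, not the test's actual values):

```c
#include "spdk/env.h"

/* Allocate a DMA-safe buffer and resolve its physical (IOVA) address. */
static int
vtophys_sketch(void)
{
	uint64_t len = 4 * 1024 * 1024;
	void *buf = spdk_malloc(len, 0x200000 /* 2 MiB alignment */, NULL,
				SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

	if (buf == NULL) {
		return -1;
	}

	/* On success, len is clamped to the physically contiguous extent. */
	uint64_t paddr = spdk_vtophys(buf, &len);

	spdk_free(buf);
	return paddr == SPDK_VTOPHYS_ERROR ? -1 : 0;
}
```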
00:07:08.793 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:08.793 EAL: Restoring previous memory policy: 4 00:07:08.793 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.793 EAL: request: mp_malloc_sync 00:07:08.793 EAL: No shared files mode enabled, IPC is disabled 00:07:08.793 EAL: Heap on socket 0 was expanded by 66MB 00:07:09.052 EAL: Calling mem event callback 'spdk:(nil)' 00:07:09.052 EAL: request: mp_malloc_sync 00:07:09.052 EAL: No shared files mode enabled, IPC is disabled 00:07:09.052 EAL: Heap on socket 0 was shrunk by 66MB 00:07:09.052 EAL: Trying to obtain current memory policy. 00:07:09.052 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:09.052 EAL: Restoring previous memory policy: 4 00:07:09.052 EAL: Calling mem event callback 'spdk:(nil)' 00:07:09.052 EAL: request: mp_malloc_sync 00:07:09.052 EAL: No shared files mode enabled, IPC is disabled 00:07:09.052 EAL: Heap on socket 0 was expanded by 130MB 00:07:09.310 EAL: Calling mem event callback 'spdk:(nil)' 00:07:09.310 EAL: request: mp_malloc_sync 00:07:09.310 EAL: No shared files mode enabled, IPC is disabled 00:07:09.310 EAL: Heap on socket 0 was shrunk by 130MB 00:07:09.570 EAL: Trying to obtain current memory policy. 00:07:09.570 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:09.570 EAL: Restoring previous memory policy: 4 00:07:09.570 EAL: Calling mem event callback 'spdk:(nil)' 00:07:09.570 EAL: request: mp_malloc_sync 00:07:09.570 EAL: No shared files mode enabled, IPC is disabled 00:07:09.570 EAL: Heap on socket 0 was expanded by 258MB 00:07:09.828 EAL: Calling mem event callback 'spdk:(nil)' 00:07:09.828 EAL: request: mp_malloc_sync 00:07:09.828 EAL: No shared files mode enabled, IPC is disabled 00:07:09.828 EAL: Heap on socket 0 was shrunk by 258MB 00:07:10.401 EAL: Trying to obtain current memory policy. 00:07:10.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.401 EAL: Restoring previous memory policy: 4 00:07:10.401 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.401 EAL: request: mp_malloc_sync 00:07:10.401 EAL: No shared files mode enabled, IPC is disabled 00:07:10.401 EAL: Heap on socket 0 was expanded by 514MB 00:07:11.338 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.338 EAL: request: mp_malloc_sync 00:07:11.338 EAL: No shared files mode enabled, IPC is disabled 00:07:11.338 EAL: Heap on socket 0 was shrunk by 514MB 00:07:12.271 EAL: Trying to obtain current memory policy. 
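The invalid-parameter errors that env_memory logged at the top of this excerpt (vaddr=2097152 len=1234, vaddr=1234 len=2097152) are deliberate negative cases: the SPDK mem map operates at 2 MiB page granularity, so spdk_mem_map_set_translation rejects any vaddr or len that is not a 2 MiB multiple, and any address outside the user-mode range. A sketch of that API under those assumptions (the translation value is arbitrary, and a NULL ops table is assumed to be acceptable for a map that takes no notify callbacks):

```c
#include "spdk/stdinc.h"
#include "spdk/env.h"

#define MAP_2MB 0x200000ULL

static int
mem_map_sketch(void)
{
	/* UINT64_MAX as the default translation for unmapped regions. */
	struct spdk_mem_map *map = spdk_mem_map_alloc(UINT64_MAX, NULL, NULL);
	uint64_t size = MAP_2MB;

	if (map == NULL) {
		return -1;
	}

	/* Both vaddr and len must be 2 MiB multiples or this returns an error. */
	if (spdk_mem_map_set_translation(map, MAP_2MB, MAP_2MB, 0x1000) != 0) {
		spdk_mem_map_free(&map);
		return -1;
	}

	/* Look the translation back up; size is clamped to the mapped extent. */
	(void)spdk_mem_map_translate(map, MAP_2MB, &size);

	spdk_mem_map_free(&map);
	return 0;
}
```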
00:07:12.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:12.271 EAL: Restoring previous memory policy: 4 00:07:12.271 EAL: Calling mem event callback 'spdk:(nil)' 00:07:12.271 EAL: request: mp_malloc_sync 00:07:12.271 EAL: No shared files mode enabled, IPC is disabled 00:07:12.271 EAL: Heap on socket 0 was expanded by 1026MB 00:07:14.170 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.170 EAL: request: mp_malloc_sync 00:07:14.170 EAL: No shared files mode enabled, IPC is disabled 00:07:14.170 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:15.583 passed 00:07:15.583 00:07:15.583 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.583 suites 1 1 n/a 0 0 00:07:15.583 tests 2 2 2 0 0 00:07:15.583 asserts 5747 5747 5747 0 n/a 00:07:15.583 00:07:15.583 Elapsed time = 7.166 seconds 00:07:15.583 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.583 EAL: request: mp_malloc_sync 00:07:15.583 EAL: No shared files mode enabled, IPC is disabled 00:07:15.583 EAL: Heap on socket 0 was shrunk by 2MB 00:07:15.583 EAL: No shared files mode enabled, IPC is disabled 00:07:15.583 EAL: No shared files mode enabled, IPC is disabled 00:07:15.583 EAL: No shared files mode enabled, IPC is disabled 00:07:15.583 00:07:15.583 real 0m7.509s 00:07:15.583 user 0m6.594s 00:07:15.583 sys 0m0.745s 00:07:15.583 08:25:54 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.583 08:25:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:15.583 ************************************ 00:07:15.583 END TEST env_vtophys 00:07:15.583 ************************************ 00:07:15.583 08:25:54 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:15.583 08:25:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.583 08:25:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.583 08:25:54 env -- common/autotest_common.sh@10 -- # set +x 00:07:15.583 ************************************ 00:07:15.583 START TEST env_pci 00:07:15.583 ************************************ 00:07:15.583 08:25:54 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:15.583 00:07:15.583 00:07:15.583 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.583 http://cunit.sourceforge.net/ 00:07:15.583 00:07:15.583 00:07:15.583 Suite: pci 00:07:15.583 Test: pci_hook ...[2024-11-19 08:25:54.757544] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58157 has claimed it 00:07:15.583 passed 00:07:15.583 00:07:15.583 EAL: Cannot find device (10000:00:01.0) 00:07:15.583 EAL: Failed to attach device on primary process 00:07:15.583 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.583 suites 1 1 n/a 0 0 00:07:15.583 tests 1 1 1 0 0 00:07:15.583 asserts 25 25 25 0 n/a 00:07:15.583 00:07:15.583 Elapsed time = 0.009 seconds 00:07:15.583 00:07:15.583 real 0m0.088s 00:07:15.583 user 0m0.048s 00:07:15.583 sys 0m0.038s 00:07:15.583 08:25:54 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.583 08:25:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:15.583 ************************************ 00:07:15.583 END TEST env_pci 00:07:15.583 ************************************ 00:07:15.583 08:25:54 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:15.583 08:25:54 env -- env/env.sh@15 -- # uname 00:07:15.864 08:25:54 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:15.864 08:25:54 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:15.864 08:25:54 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:15.864 08:25:54 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:15.864 08:25:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.864 08:25:54 env -- common/autotest_common.sh@10 -- # set +x 00:07:15.864 ************************************ 00:07:15.864 START TEST env_dpdk_post_init 00:07:15.864 ************************************ 00:07:15.864 08:25:54 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:15.864 EAL: Detected CPU lcores: 10 00:07:15.864 EAL: Detected NUMA nodes: 1 00:07:15.864 EAL: Detected shared linkage of DPDK 00:07:15.864 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:15.864 EAL: Selected IOVA mode 'PA' 00:07:15.865 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:15.865 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:15.865 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:15.865 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:07:15.865 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:07:15.865 Starting DPDK initialization... 00:07:15.865 Starting SPDK post initialization... 00:07:15.865 SPDK NVMe probe 00:07:15.865 Attaching to 0000:00:10.0 00:07:15.865 Attaching to 0000:00:11.0 00:07:15.865 Attaching to 0000:00:12.0 00:07:15.865 Attaching to 0000:00:13.0 00:07:15.865 Attached to 0000:00:10.0 00:07:15.865 Attached to 0000:00:11.0 00:07:15.865 Attached to 0000:00:13.0 00:07:15.865 Attached to 0000:00:12.0 00:07:15.865 Cleaning up... 
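env_dpdk_post_init performs the same environment bring-up an SPDK application does before the NVMe probe above attaches the four emulated controllers. Roughly, and assuming the spdk_env_opts fields correspond to the -c 0x1 --base-virtaddr=0x200000000000 arguments shown (a sketch, not the test's actual source; the app name is hypothetical):

```c
#include "spdk/env.h"

static int
env_init_sketch(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "env_dpdk_post_init";        /* hypothetical app name */
	opts.core_mask = "0x1";                   /* mirrors -c 0x1 */
	opts.base_virtaddr = 0x200000000000ULL;   /* mirrors --base-virtaddr */

	/* Initializes EAL, hugepage memory, and PCI access; < 0 on failure. */
	return spdk_env_init(&opts) < 0 ? -1 : 0;
}
```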
00:07:16.123 00:07:16.123 real 0m0.295s 00:07:16.123 user 0m0.105s 00:07:16.123 sys 0m0.092s 00:07:16.123 08:25:55 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.123 08:25:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:16.123 ************************************ 00:07:16.123 END TEST env_dpdk_post_init 00:07:16.123 ************************************ 00:07:16.123 08:25:55 env -- env/env.sh@26 -- # uname 00:07:16.123 08:25:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:16.123 08:25:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:16.123 08:25:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.123 08:25:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.123 08:25:55 env -- common/autotest_common.sh@10 -- # set +x 00:07:16.123 ************************************ 00:07:16.123 START TEST env_mem_callbacks 00:07:16.123 ************************************ 00:07:16.123 08:25:55 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:16.123 EAL: Detected CPU lcores: 10 00:07:16.123 EAL: Detected NUMA nodes: 1 00:07:16.123 EAL: Detected shared linkage of DPDK 00:07:16.123 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:16.123 EAL: Selected IOVA mode 'PA' 00:07:16.123 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:16.123 00:07:16.123 00:07:16.123 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.123 http://cunit.sourceforge.net/ 00:07:16.123 00:07:16.123 00:07:16.123 Suite: memory 00:07:16.123 Test: test ... 00:07:16.123 register 0x200000200000 2097152 00:07:16.123 malloc 3145728 00:07:16.123 register 0x200000400000 4194304 00:07:16.123 buf 0x2000004fffc0 len 3145728 PASSED 00:07:16.123 malloc 64 00:07:16.123 buf 0x2000004ffec0 len 64 PASSED 00:07:16.123 malloc 4194304 00:07:16.123 register 0x200000800000 6291456 00:07:16.123 buf 0x2000009fffc0 len 4194304 PASSED 00:07:16.123 free 0x2000004fffc0 3145728 00:07:16.123 free 0x2000004ffec0 64 00:07:16.123 unregister 0x200000400000 4194304 PASSED 00:07:16.123 free 0x2000009fffc0 4194304 00:07:16.123 unregister 0x200000800000 6291456 PASSED 00:07:16.382 malloc 8388608 00:07:16.382 register 0x200000400000 10485760 00:07:16.382 buf 0x2000005fffc0 len 8388608 PASSED 00:07:16.382 free 0x2000005fffc0 8388608 00:07:16.382 unregister 0x200000400000 10485760 PASSED 00:07:16.382 passed 00:07:16.382 00:07:16.382 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.382 suites 1 1 n/a 0 0 00:07:16.382 tests 1 1 1 0 0 00:07:16.382 asserts 15 15 15 0 n/a 00:07:16.382 00:07:16.382 Elapsed time = 0.063 seconds 00:07:16.382 00:07:16.382 real 0m0.255s 00:07:16.382 user 0m0.087s 00:07:16.382 sys 0m0.067s 00:07:16.382 08:25:55 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.382 08:25:55 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:16.382 ************************************ 00:07:16.382 END TEST env_mem_callbacks 00:07:16.382 ************************************ 00:07:16.382 00:07:16.382 real 0m8.936s 00:07:16.382 user 0m7.343s 00:07:16.382 sys 0m1.202s 00:07:16.382 08:25:55 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.382 08:25:55 env -- common/autotest_common.sh@10 -- # set +x 00:07:16.382 ************************************ 00:07:16.382 END TEST env 00:07:16.382 
************************************ 00:07:16.382 08:25:55 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:16.382 08:25:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.382 08:25:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.382 08:25:55 -- common/autotest_common.sh@10 -- # set +x 00:07:16.382 ************************************ 00:07:16.382 START TEST rpc 00:07:16.382 ************************************ 00:07:16.382 08:25:55 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:16.382 * Looking for test storage... 00:07:16.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:16.382 08:25:55 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:16.382 08:25:55 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:16.382 08:25:55 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:16.640 08:25:55 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:16.640 08:25:55 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.640 08:25:55 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.640 08:25:55 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.640 08:25:55 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.640 08:25:55 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.640 08:25:55 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.640 08:25:55 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.640 08:25:55 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.641 08:25:55 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.641 08:25:55 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.641 08:25:55 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.641 08:25:55 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:16.641 08:25:55 rpc -- scripts/common.sh@345 -- # : 1 00:07:16.641 08:25:55 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.641 08:25:55 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.641 08:25:55 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:16.641 08:25:55 rpc -- scripts/common.sh@353 -- # local d=1 00:07:16.641 08:25:55 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.641 08:25:55 rpc -- scripts/common.sh@355 -- # echo 1 00:07:16.641 08:25:55 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.641 08:25:55 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:16.641 08:25:55 rpc -- scripts/common.sh@353 -- # local d=2 00:07:16.641 08:25:55 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.641 08:25:55 rpc -- scripts/common.sh@355 -- # echo 2 00:07:16.641 08:25:55 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.641 08:25:55 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.641 08:25:55 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.641 08:25:55 rpc -- scripts/common.sh@368 -- # return 0 00:07:16.641 08:25:55 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.641 08:25:55 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:16.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.641 --rc genhtml_branch_coverage=1 00:07:16.641 --rc genhtml_function_coverage=1 00:07:16.641 --rc genhtml_legend=1 00:07:16.641 --rc geninfo_all_blocks=1 00:07:16.641 --rc geninfo_unexecuted_blocks=1 00:07:16.641 00:07:16.641 ' 00:07:16.641 08:25:55 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:16.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.641 --rc genhtml_branch_coverage=1 00:07:16.641 --rc genhtml_function_coverage=1 00:07:16.641 --rc genhtml_legend=1 00:07:16.641 --rc geninfo_all_blocks=1 00:07:16.641 --rc geninfo_unexecuted_blocks=1 00:07:16.641 00:07:16.641 ' 00:07:16.641 08:25:55 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:16.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.641 --rc genhtml_branch_coverage=1 00:07:16.641 --rc genhtml_function_coverage=1 00:07:16.641 --rc genhtml_legend=1 00:07:16.641 --rc geninfo_all_blocks=1 00:07:16.641 --rc geninfo_unexecuted_blocks=1 00:07:16.641 00:07:16.641 ' 00:07:16.641 08:25:55 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:16.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.641 --rc genhtml_branch_coverage=1 00:07:16.641 --rc genhtml_function_coverage=1 00:07:16.641 --rc genhtml_legend=1 00:07:16.641 --rc geninfo_all_blocks=1 00:07:16.641 --rc geninfo_unexecuted_blocks=1 00:07:16.641 00:07:16.641 ' 00:07:16.641 08:25:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58284 00:07:16.641 08:25:55 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:16.641 08:25:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:16.641 08:25:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58284 00:07:16.641 08:25:55 rpc -- common/autotest_common.sh@835 -- # '[' -z 58284 ']' 00:07:16.641 08:25:55 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.641 08:25:55 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.641 08:25:55 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
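The rpc suite starting here drives a live spdk_tgt over the /var/tmp/spdk.sock JSON-RPC socket. Server-side, every method name the suite invokes (bdev_malloc_create, bdev_get_bdevs, bdev_passthru_create, ...) is registered in C through the same mechanism; a minimal sketch with a hypothetical method name, assuming SPDK's public rpc.h/jsonrpc.h API:

```c
#include "spdk/rpc.h"
#include "spdk/jsonrpc.h"

/* "test_hello" is a hypothetical method, not one used by the suite above. */
static void
rpc_test_hello(struct spdk_jsonrpc_request *request,
	       const struct spdk_json_val *params)
{
	if (params != NULL) {
		spdk_jsonrpc_send_error_response(request,
						 SPDK_JSONRPC_ERROR_INVALID_PARAMS,
						 "test_hello takes no parameters");
		return;
	}
	spdk_jsonrpc_send_bool_response(request, true);
}
/* Registers the handler at load time; callable once the target is running. */
SPDK_RPC_REGISTER("test_hello", rpc_test_hello, SPDK_RPC_RUNTIME)
```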
00:07:16.641 08:25:55 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.641 08:25:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.641 [2024-11-19 08:25:55.872574] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:16.641 [2024-11-19 08:25:55.872737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58284 ] 00:07:16.900 [2024-11-19 08:25:56.058923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.159 [2024-11-19 08:25:56.193097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:17.159 [2024-11-19 08:25:56.193181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58284' to capture a snapshot of events at runtime. 00:07:17.159 [2024-11-19 08:25:56.193201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.159 [2024-11-19 08:25:56.193229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.159 [2024-11-19 08:25:56.193245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58284 for offline analysis/debug. 00:07:17.159 [2024-11-19 08:25:56.194705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.095 08:25:57 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.095 08:25:57 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:18.095 08:25:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:18.095 08:25:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:18.095 08:25:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:18.095 08:25:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:18.095 08:25:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.095 08:25:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.095 08:25:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.095 ************************************ 00:07:18.095 START TEST rpc_integrity 00:07:18.095 ************************************ 00:07:18.095 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:18.095 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:18.095 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.095 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.095 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.095 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:18.095 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:18.095 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:18.095 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:18.095 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.095 08:25:57 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.095 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.095 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:18.095 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:18.095 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.095 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.095 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.095 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:18.095 { 00:07:18.095 "name": "Malloc0", 00:07:18.095 "aliases": [ 00:07:18.095 "ec35f82e-97bc-4a07-87e7-a4789b3b7e98" 00:07:18.095 ], 00:07:18.095 "product_name": "Malloc disk", 00:07:18.095 "block_size": 512, 00:07:18.095 "num_blocks": 16384, 00:07:18.095 "uuid": "ec35f82e-97bc-4a07-87e7-a4789b3b7e98", 00:07:18.095 "assigned_rate_limits": { 00:07:18.095 "rw_ios_per_sec": 0, 00:07:18.095 "rw_mbytes_per_sec": 0, 00:07:18.095 "r_mbytes_per_sec": 0, 00:07:18.095 "w_mbytes_per_sec": 0 00:07:18.095 }, 00:07:18.095 "claimed": false, 00:07:18.095 "zoned": false, 00:07:18.095 "supported_io_types": { 00:07:18.095 "read": true, 00:07:18.095 "write": true, 00:07:18.095 "unmap": true, 00:07:18.095 "flush": true, 00:07:18.095 "reset": true, 00:07:18.095 "nvme_admin": false, 00:07:18.095 "nvme_io": false, 00:07:18.095 "nvme_io_md": false, 00:07:18.095 "write_zeroes": true, 00:07:18.095 "zcopy": true, 00:07:18.095 "get_zone_info": false, 00:07:18.095 "zone_management": false, 00:07:18.095 "zone_append": false, 00:07:18.095 "compare": false, 00:07:18.095 "compare_and_write": false, 00:07:18.095 "abort": true, 00:07:18.095 "seek_hole": false, 00:07:18.095 "seek_data": false, 00:07:18.095 "copy": true, 00:07:18.095 "nvme_iov_md": false 00:07:18.095 }, 00:07:18.095 "memory_domains": [ 00:07:18.095 { 00:07:18.095 "dma_device_id": "system", 00:07:18.095 "dma_device_type": 1 00:07:18.095 }, 00:07:18.095 { 00:07:18.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.095 "dma_device_type": 2 00:07:18.095 } 00:07:18.095 ], 00:07:18.095 "driver_specific": {} 00:07:18.095 } 00:07:18.095 ]' 00:07:18.095 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:18.095 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:18.095 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:18.095 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.095 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.095 [2024-11-19 08:25:57.220428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:18.095 [2024-11-19 08:25:57.220516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.095 [2024-11-19 08:25:57.220571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:18.095 [2024-11-19 08:25:57.220595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.095 [2024-11-19 08:25:57.223576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.095 [2024-11-19 08:25:57.223778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:18.095 Passthru0 00:07:18.095 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.095 
08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:18.095 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.095 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.095 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.095 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:18.095 { 00:07:18.095 "name": "Malloc0", 00:07:18.095 "aliases": [ 00:07:18.095 "ec35f82e-97bc-4a07-87e7-a4789b3b7e98" 00:07:18.095 ], 00:07:18.095 "product_name": "Malloc disk", 00:07:18.095 "block_size": 512, 00:07:18.095 "num_blocks": 16384, 00:07:18.095 "uuid": "ec35f82e-97bc-4a07-87e7-a4789b3b7e98", 00:07:18.095 "assigned_rate_limits": { 00:07:18.095 "rw_ios_per_sec": 0, 00:07:18.095 "rw_mbytes_per_sec": 0, 00:07:18.095 "r_mbytes_per_sec": 0, 00:07:18.095 "w_mbytes_per_sec": 0 00:07:18.095 }, 00:07:18.095 "claimed": true, 00:07:18.095 "claim_type": "exclusive_write", 00:07:18.095 "zoned": false, 00:07:18.095 "supported_io_types": { 00:07:18.095 "read": true, 00:07:18.096 "write": true, 00:07:18.096 "unmap": true, 00:07:18.096 "flush": true, 00:07:18.096 "reset": true, 00:07:18.096 "nvme_admin": false, 00:07:18.096 "nvme_io": false, 00:07:18.096 "nvme_io_md": false, 00:07:18.096 "write_zeroes": true, 00:07:18.096 "zcopy": true, 00:07:18.096 "get_zone_info": false, 00:07:18.096 "zone_management": false, 00:07:18.096 "zone_append": false, 00:07:18.096 "compare": false, 00:07:18.096 "compare_and_write": false, 00:07:18.096 "abort": true, 00:07:18.096 "seek_hole": false, 00:07:18.096 "seek_data": false, 00:07:18.096 "copy": true, 00:07:18.096 "nvme_iov_md": false 00:07:18.096 }, 00:07:18.096 "memory_domains": [ 00:07:18.096 { 00:07:18.096 "dma_device_id": "system", 00:07:18.096 "dma_device_type": 1 00:07:18.096 }, 00:07:18.096 { 00:07:18.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.096 "dma_device_type": 2 00:07:18.096 } 00:07:18.096 ], 00:07:18.096 "driver_specific": {} 00:07:18.096 }, 00:07:18.096 { 00:07:18.096 "name": "Passthru0", 00:07:18.096 "aliases": [ 00:07:18.096 "ad7f63a2-5298-5aba-9f21-fe0cb764b643" 00:07:18.096 ], 00:07:18.096 "product_name": "passthru", 00:07:18.096 "block_size": 512, 00:07:18.096 "num_blocks": 16384, 00:07:18.096 "uuid": "ad7f63a2-5298-5aba-9f21-fe0cb764b643", 00:07:18.096 "assigned_rate_limits": { 00:07:18.096 "rw_ios_per_sec": 0, 00:07:18.096 "rw_mbytes_per_sec": 0, 00:07:18.096 "r_mbytes_per_sec": 0, 00:07:18.096 "w_mbytes_per_sec": 0 00:07:18.096 }, 00:07:18.096 "claimed": false, 00:07:18.096 "zoned": false, 00:07:18.096 "supported_io_types": { 00:07:18.096 "read": true, 00:07:18.096 "write": true, 00:07:18.096 "unmap": true, 00:07:18.096 "flush": true, 00:07:18.096 "reset": true, 00:07:18.096 "nvme_admin": false, 00:07:18.096 "nvme_io": false, 00:07:18.096 "nvme_io_md": false, 00:07:18.096 "write_zeroes": true, 00:07:18.096 "zcopy": true, 00:07:18.096 "get_zone_info": false, 00:07:18.096 "zone_management": false, 00:07:18.096 "zone_append": false, 00:07:18.096 "compare": false, 00:07:18.096 "compare_and_write": false, 00:07:18.096 "abort": true, 00:07:18.096 "seek_hole": false, 00:07:18.096 "seek_data": false, 00:07:18.096 "copy": true, 00:07:18.096 "nvme_iov_md": false 00:07:18.096 }, 00:07:18.096 "memory_domains": [ 00:07:18.096 { 00:07:18.096 "dma_device_id": "system", 00:07:18.096 "dma_device_type": 1 00:07:18.096 }, 00:07:18.096 { 00:07:18.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.096 "dma_device_type": 2 
00:07:18.096 } 00:07:18.096 ], 00:07:18.096 "driver_specific": { 00:07:18.096 "passthru": { 00:07:18.096 "name": "Passthru0", 00:07:18.096 "base_bdev_name": "Malloc0" 00:07:18.096 } 00:07:18.096 } 00:07:18.096 } 00:07:18.096 ]' 00:07:18.096 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:18.096 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:18.096 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:18.096 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.096 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.096 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.096 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:18.096 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.096 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.096 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.096 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:18.096 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.096 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.096 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.096 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:18.096 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:18.355 ************************************ 00:07:18.355 END TEST rpc_integrity 00:07:18.355 ************************************ 00:07:18.355 08:25:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:18.355 00:07:18.355 real 0m0.371s 00:07:18.355 user 0m0.229s 00:07:18.355 sys 0m0.041s 00:07:18.355 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.355 08:25:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.355 08:25:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:18.355 08:25:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.355 08:25:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.355 08:25:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.355 ************************************ 00:07:18.355 START TEST rpc_plugins 00:07:18.355 ************************************ 00:07:18.355 08:25:57 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:18.355 08:25:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:18.355 08:25:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.355 08:25:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:18.355 08:25:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.355 08:25:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:18.355 08:25:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:18.355 08:25:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.355 08:25:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:18.355 08:25:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.355 08:25:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:18.355 { 00:07:18.355 "name": "Malloc1", 00:07:18.355 "aliases": 
[ 00:07:18.355 "9a1bf2b5-d137-4002-9195-42babc489fca" 00:07:18.355 ], 00:07:18.355 "product_name": "Malloc disk", 00:07:18.355 "block_size": 4096, 00:07:18.355 "num_blocks": 256, 00:07:18.355 "uuid": "9a1bf2b5-d137-4002-9195-42babc489fca", 00:07:18.355 "assigned_rate_limits": { 00:07:18.355 "rw_ios_per_sec": 0, 00:07:18.355 "rw_mbytes_per_sec": 0, 00:07:18.355 "r_mbytes_per_sec": 0, 00:07:18.355 "w_mbytes_per_sec": 0 00:07:18.355 }, 00:07:18.355 "claimed": false, 00:07:18.355 "zoned": false, 00:07:18.355 "supported_io_types": { 00:07:18.355 "read": true, 00:07:18.355 "write": true, 00:07:18.355 "unmap": true, 00:07:18.355 "flush": true, 00:07:18.355 "reset": true, 00:07:18.355 "nvme_admin": false, 00:07:18.355 "nvme_io": false, 00:07:18.355 "nvme_io_md": false, 00:07:18.355 "write_zeroes": true, 00:07:18.355 "zcopy": true, 00:07:18.355 "get_zone_info": false, 00:07:18.355 "zone_management": false, 00:07:18.355 "zone_append": false, 00:07:18.355 "compare": false, 00:07:18.355 "compare_and_write": false, 00:07:18.355 "abort": true, 00:07:18.355 "seek_hole": false, 00:07:18.355 "seek_data": false, 00:07:18.355 "copy": true, 00:07:18.355 "nvme_iov_md": false 00:07:18.355 }, 00:07:18.355 "memory_domains": [ 00:07:18.355 { 00:07:18.355 "dma_device_id": "system", 00:07:18.355 "dma_device_type": 1 00:07:18.355 }, 00:07:18.355 { 00:07:18.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.355 "dma_device_type": 2 00:07:18.355 } 00:07:18.355 ], 00:07:18.355 "driver_specific": {} 00:07:18.355 } 00:07:18.355 ]' 00:07:18.355 08:25:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:18.355 08:25:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:18.355 08:25:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:18.355 08:25:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.355 08:25:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:18.355 08:25:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.355 08:25:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:18.355 08:25:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.355 08:25:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:18.355 08:25:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.355 08:25:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:18.355 08:25:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:18.355 ************************************ 00:07:18.355 END TEST rpc_plugins 00:07:18.355 ************************************ 00:07:18.355 08:25:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:18.355 00:07:18.355 real 0m0.170s 00:07:18.355 user 0m0.108s 00:07:18.355 sys 0m0.018s 00:07:18.355 08:25:57 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.355 08:25:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:18.614 08:25:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:18.614 08:25:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.614 08:25:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.614 08:25:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.614 ************************************ 00:07:18.614 START TEST rpc_trace_cmd_test 00:07:18.614 ************************************ 00:07:18.614 08:25:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:07:18.614 08:25:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:18.614 08:25:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:18.614 08:25:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.614 08:25:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.614 08:25:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.614 08:25:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:18.614 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58284", 00:07:18.614 "tpoint_group_mask": "0x8", 00:07:18.614 "iscsi_conn": { 00:07:18.614 "mask": "0x2", 00:07:18.614 "tpoint_mask": "0x0" 00:07:18.614 }, 00:07:18.614 "scsi": { 00:07:18.614 "mask": "0x4", 00:07:18.614 "tpoint_mask": "0x0" 00:07:18.614 }, 00:07:18.614 "bdev": { 00:07:18.614 "mask": "0x8", 00:07:18.614 "tpoint_mask": "0xffffffffffffffff" 00:07:18.614 }, 00:07:18.614 "nvmf_rdma": { 00:07:18.614 "mask": "0x10", 00:07:18.614 "tpoint_mask": "0x0" 00:07:18.614 }, 00:07:18.614 "nvmf_tcp": { 00:07:18.614 "mask": "0x20", 00:07:18.614 "tpoint_mask": "0x0" 00:07:18.614 }, 00:07:18.614 "ftl": { 00:07:18.614 "mask": "0x40", 00:07:18.614 "tpoint_mask": "0x0" 00:07:18.614 }, 00:07:18.614 "blobfs": { 00:07:18.614 "mask": "0x80", 00:07:18.614 "tpoint_mask": "0x0" 00:07:18.614 }, 00:07:18.614 "dsa": { 00:07:18.614 "mask": "0x200", 00:07:18.614 "tpoint_mask": "0x0" 00:07:18.614 }, 00:07:18.614 "thread": { 00:07:18.614 "mask": "0x400", 00:07:18.614 "tpoint_mask": "0x0" 00:07:18.614 }, 00:07:18.614 "nvme_pcie": { 00:07:18.614 "mask": "0x800", 00:07:18.614 "tpoint_mask": "0x0" 00:07:18.614 }, 00:07:18.614 "iaa": { 00:07:18.614 "mask": "0x1000", 00:07:18.614 "tpoint_mask": "0x0" 00:07:18.614 }, 00:07:18.614 "nvme_tcp": { 00:07:18.614 "mask": "0x2000", 00:07:18.614 "tpoint_mask": "0x0" 00:07:18.614 }, 00:07:18.614 "bdev_nvme": { 00:07:18.614 "mask": "0x4000", 00:07:18.614 "tpoint_mask": "0x0" 00:07:18.614 }, 00:07:18.614 "sock": { 00:07:18.614 "mask": "0x8000", 00:07:18.614 "tpoint_mask": "0x0" 00:07:18.614 }, 00:07:18.614 "blob": { 00:07:18.614 "mask": "0x10000", 00:07:18.614 "tpoint_mask": "0x0" 00:07:18.614 }, 00:07:18.614 "bdev_raid": { 00:07:18.614 "mask": "0x20000", 00:07:18.614 "tpoint_mask": "0x0" 00:07:18.614 }, 00:07:18.614 "scheduler": { 00:07:18.614 "mask": "0x40000", 00:07:18.614 "tpoint_mask": "0x0" 00:07:18.614 } 00:07:18.614 }' 00:07:18.614 08:25:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:18.614 08:25:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:18.614 08:25:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:18.614 08:25:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:18.614 08:25:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:18.614 08:25:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:18.614 08:25:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:18.873 08:25:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:18.873 08:25:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:18.873 ************************************ 00:07:18.873 END TEST rpc_trace_cmd_test 00:07:18.873 ************************************ 00:07:18.873 08:25:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:18.873 00:07:18.873 real 0m0.297s 
00:07:18.873 user 0m0.259s 00:07:18.873 sys 0m0.027s 00:07:18.873 08:25:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.873 08:25:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.873 08:25:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:18.873 08:25:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:18.873 08:25:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:18.873 08:25:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.873 08:25:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.873 08:25:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.873 ************************************ 00:07:18.873 START TEST rpc_daemon_integrity 00:07:18.873 ************************************ 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:18.873 { 00:07:18.873 "name": "Malloc2", 00:07:18.873 "aliases": [ 00:07:18.873 "77282bf3-4380-4cdb-87f0-81144abecca9" 00:07:18.873 ], 00:07:18.873 "product_name": "Malloc disk", 00:07:18.873 "block_size": 512, 00:07:18.873 "num_blocks": 16384, 00:07:18.873 "uuid": "77282bf3-4380-4cdb-87f0-81144abecca9", 00:07:18.873 "assigned_rate_limits": { 00:07:18.873 "rw_ios_per_sec": 0, 00:07:18.873 "rw_mbytes_per_sec": 0, 00:07:18.873 "r_mbytes_per_sec": 0, 00:07:18.873 "w_mbytes_per_sec": 0 00:07:18.873 }, 00:07:18.873 "claimed": false, 00:07:18.873 "zoned": false, 00:07:18.873 "supported_io_types": { 00:07:18.873 "read": true, 00:07:18.873 "write": true, 00:07:18.873 "unmap": true, 00:07:18.873 "flush": true, 00:07:18.873 "reset": true, 00:07:18.873 "nvme_admin": false, 00:07:18.873 "nvme_io": false, 00:07:18.873 "nvme_io_md": false, 00:07:18.873 "write_zeroes": true, 00:07:18.873 "zcopy": true, 00:07:18.873 "get_zone_info": false, 00:07:18.873 "zone_management": false, 00:07:18.873 "zone_append": false, 00:07:18.873 "compare": false, 00:07:18.873 
"compare_and_write": false, 00:07:18.873 "abort": true, 00:07:18.873 "seek_hole": false, 00:07:18.873 "seek_data": false, 00:07:18.873 "copy": true, 00:07:18.873 "nvme_iov_md": false 00:07:18.873 }, 00:07:18.873 "memory_domains": [ 00:07:18.873 { 00:07:18.873 "dma_device_id": "system", 00:07:18.873 "dma_device_type": 1 00:07:18.873 }, 00:07:18.873 { 00:07:18.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.873 "dma_device_type": 2 00:07:18.873 } 00:07:18.873 ], 00:07:18.873 "driver_specific": {} 00:07:18.873 } 00:07:18.873 ]' 00:07:18.873 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.132 [2024-11-19 08:25:58.215102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:19.132 [2024-11-19 08:25:58.215199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.132 [2024-11-19 08:25:58.215231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:19.132 [2024-11-19 08:25:58.215248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.132 [2024-11-19 08:25:58.218378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.132 [2024-11-19 08:25:58.218445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:19.132 Passthru0 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:19.132 { 00:07:19.132 "name": "Malloc2", 00:07:19.132 "aliases": [ 00:07:19.132 "77282bf3-4380-4cdb-87f0-81144abecca9" 00:07:19.132 ], 00:07:19.132 "product_name": "Malloc disk", 00:07:19.132 "block_size": 512, 00:07:19.132 "num_blocks": 16384, 00:07:19.132 "uuid": "77282bf3-4380-4cdb-87f0-81144abecca9", 00:07:19.132 "assigned_rate_limits": { 00:07:19.132 "rw_ios_per_sec": 0, 00:07:19.132 "rw_mbytes_per_sec": 0, 00:07:19.132 "r_mbytes_per_sec": 0, 00:07:19.132 "w_mbytes_per_sec": 0 00:07:19.132 }, 00:07:19.132 "claimed": true, 00:07:19.132 "claim_type": "exclusive_write", 00:07:19.132 "zoned": false, 00:07:19.132 "supported_io_types": { 00:07:19.132 "read": true, 00:07:19.132 "write": true, 00:07:19.132 "unmap": true, 00:07:19.132 "flush": true, 00:07:19.132 "reset": true, 00:07:19.132 "nvme_admin": false, 00:07:19.132 "nvme_io": false, 00:07:19.132 "nvme_io_md": false, 00:07:19.132 "write_zeroes": true, 00:07:19.132 "zcopy": true, 00:07:19.132 "get_zone_info": false, 00:07:19.132 "zone_management": false, 00:07:19.132 "zone_append": false, 00:07:19.132 "compare": false, 00:07:19.132 "compare_and_write": false, 00:07:19.132 "abort": true, 00:07:19.132 "seek_hole": false, 00:07:19.132 "seek_data": false, 
00:07:19.132 "copy": true, 00:07:19.132 "nvme_iov_md": false 00:07:19.132 }, 00:07:19.132 "memory_domains": [ 00:07:19.132 { 00:07:19.132 "dma_device_id": "system", 00:07:19.132 "dma_device_type": 1 00:07:19.132 }, 00:07:19.132 { 00:07:19.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.132 "dma_device_type": 2 00:07:19.132 } 00:07:19.132 ], 00:07:19.132 "driver_specific": {} 00:07:19.132 }, 00:07:19.132 { 00:07:19.132 "name": "Passthru0", 00:07:19.132 "aliases": [ 00:07:19.132 "fe73974b-c4c3-58c1-a6c7-024fa046683f" 00:07:19.132 ], 00:07:19.132 "product_name": "passthru", 00:07:19.132 "block_size": 512, 00:07:19.132 "num_blocks": 16384, 00:07:19.132 "uuid": "fe73974b-c4c3-58c1-a6c7-024fa046683f", 00:07:19.132 "assigned_rate_limits": { 00:07:19.132 "rw_ios_per_sec": 0, 00:07:19.132 "rw_mbytes_per_sec": 0, 00:07:19.132 "r_mbytes_per_sec": 0, 00:07:19.132 "w_mbytes_per_sec": 0 00:07:19.132 }, 00:07:19.132 "claimed": false, 00:07:19.132 "zoned": false, 00:07:19.132 "supported_io_types": { 00:07:19.132 "read": true, 00:07:19.132 "write": true, 00:07:19.132 "unmap": true, 00:07:19.132 "flush": true, 00:07:19.132 "reset": true, 00:07:19.132 "nvme_admin": false, 00:07:19.132 "nvme_io": false, 00:07:19.132 "nvme_io_md": false, 00:07:19.132 "write_zeroes": true, 00:07:19.132 "zcopy": true, 00:07:19.132 "get_zone_info": false, 00:07:19.132 "zone_management": false, 00:07:19.132 "zone_append": false, 00:07:19.132 "compare": false, 00:07:19.132 "compare_and_write": false, 00:07:19.132 "abort": true, 00:07:19.132 "seek_hole": false, 00:07:19.132 "seek_data": false, 00:07:19.132 "copy": true, 00:07:19.132 "nvme_iov_md": false 00:07:19.132 }, 00:07:19.132 "memory_domains": [ 00:07:19.132 { 00:07:19.132 "dma_device_id": "system", 00:07:19.132 "dma_device_type": 1 00:07:19.132 }, 00:07:19.132 { 00:07:19.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.132 "dma_device_type": 2 00:07:19.132 } 00:07:19.132 ], 00:07:19.132 "driver_specific": { 00:07:19.132 "passthru": { 00:07:19.132 "name": "Passthru0", 00:07:19.132 "base_bdev_name": "Malloc2" 00:07:19.132 } 00:07:19.132 } 00:07:19.132 } 00:07:19.132 ]' 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:19.132 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.133 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.133 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.133 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:07:19.133 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:19.133 ************************************ 00:07:19.133 END TEST rpc_daemon_integrity 00:07:19.133 ************************************ 00:07:19.133 08:25:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:19.133 00:07:19.133 real 0m0.365s 00:07:19.133 user 0m0.232s 00:07:19.133 sys 0m0.039s 00:07:19.133 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.133 08:25:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:19.391 08:25:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:19.391 08:25:58 rpc -- rpc/rpc.sh@84 -- # killprocess 58284 00:07:19.391 08:25:58 rpc -- common/autotest_common.sh@954 -- # '[' -z 58284 ']' 00:07:19.391 08:25:58 rpc -- common/autotest_common.sh@958 -- # kill -0 58284 00:07:19.391 08:25:58 rpc -- common/autotest_common.sh@959 -- # uname 00:07:19.391 08:25:58 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.391 08:25:58 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58284 00:07:19.391 killing process with pid 58284 00:07:19.391 08:25:58 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.391 08:25:58 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.391 08:25:58 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58284' 00:07:19.391 08:25:58 rpc -- common/autotest_common.sh@973 -- # kill 58284 00:07:19.391 08:25:58 rpc -- common/autotest_common.sh@978 -- # wait 58284 00:07:21.928 ************************************ 00:07:21.928 END TEST rpc 00:07:21.928 ************************************ 00:07:21.928 00:07:21.928 real 0m5.093s 00:07:21.928 user 0m6.028s 00:07:21.928 sys 0m0.780s 00:07:21.928 08:26:00 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.928 08:26:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.928 08:26:00 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:21.928 08:26:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.928 08:26:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.928 08:26:00 -- common/autotest_common.sh@10 -- # set +x 00:07:21.928 ************************************ 00:07:21.928 START TEST skip_rpc 00:07:21.928 ************************************ 00:07:21.928 08:26:00 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:21.928 * Looking for test storage... 
00:07:21.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:21.928 08:26:00 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:21.928 08:26:00 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:21.928 08:26:00 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:21.928 08:26:00 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.928 08:26:00 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:21.928 08:26:00 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.928 08:26:00 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:21.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.928 --rc genhtml_branch_coverage=1 00:07:21.928 --rc genhtml_function_coverage=1 00:07:21.928 --rc genhtml_legend=1 00:07:21.928 --rc geninfo_all_blocks=1 00:07:21.928 --rc geninfo_unexecuted_blocks=1 00:07:21.928 00:07:21.928 ' 00:07:21.928 08:26:00 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:21.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.928 --rc genhtml_branch_coverage=1 00:07:21.928 --rc genhtml_function_coverage=1 00:07:21.928 --rc genhtml_legend=1 00:07:21.928 --rc geninfo_all_blocks=1 00:07:21.928 --rc geninfo_unexecuted_blocks=1 00:07:21.928 00:07:21.928 ' 00:07:21.928 08:26:00 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:21.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.928 --rc genhtml_branch_coverage=1 00:07:21.928 --rc genhtml_function_coverage=1 00:07:21.928 --rc genhtml_legend=1 00:07:21.928 --rc geninfo_all_blocks=1 00:07:21.928 --rc geninfo_unexecuted_blocks=1 00:07:21.928 00:07:21.928 ' 00:07:21.928 08:26:00 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:21.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.928 --rc genhtml_branch_coverage=1 00:07:21.928 --rc genhtml_function_coverage=1 00:07:21.928 --rc genhtml_legend=1 00:07:21.928 --rc geninfo_all_blocks=1 00:07:21.928 --rc geninfo_unexecuted_blocks=1 00:07:21.928 00:07:21.928 ' 00:07:21.928 08:26:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:21.928 08:26:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:21.928 08:26:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:21.928 08:26:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.928 08:26:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.928 08:26:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.928 ************************************ 00:07:21.928 START TEST skip_rpc 00:07:21.928 ************************************ 00:07:21.928 08:26:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:21.928 08:26:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58513 00:07:21.928 08:26:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:21.928 08:26:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:21.928 08:26:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:21.928 [2024-11-19 08:26:01.049652] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
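Context for the records that follow: skip_rpc.sh@15 starts spdk_tgt with --no-rpc-server, so the target comes up without any listener on /var/tmp/spdk.sock, and the test then proves an RPC call fails cleanly (the NOT rpc_cmd spdk_get_version sequence below) before killing the target. A minimal hand-run sketch of the same negative check, assuming the default socket path:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    if ./scripts/rpc.py spdk_get_version; then echo 'unexpected: RPC was served'; exit 1; fi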
00:07:21.928 [2024-11-19 08:26:01.049983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58513 ] 00:07:22.187 [2024-11-19 08:26:01.235571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.187 [2024-11-19 08:26:01.344500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58513 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58513 ']' 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58513 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58513 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58513' 00:07:27.457 killing process with pid 58513 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58513 00:07:27.457 08:26:05 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58513 00:07:28.851 00:07:28.851 ************************************ 00:07:28.851 END TEST skip_rpc 00:07:28.851 ************************************ 00:07:28.851 real 0m7.093s 00:07:28.851 user 0m6.655s 00:07:28.851 sys 0m0.334s 00:07:28.851 08:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.851 08:26:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:07:28.851 08:26:08 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:28.851 08:26:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.851 08:26:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.851 08:26:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.851 ************************************ 00:07:28.851 START TEST skip_rpc_with_json 00:07:28.851 ************************************ 00:07:28.851 08:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:28.851 08:26:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:28.851 08:26:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58617 00:07:28.851 08:26:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:28.851 08:26:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:28.851 08:26:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58617 00:07:28.851 08:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58617 ']' 00:07:28.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.851 08:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.851 08:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.851 08:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.851 08:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.851 08:26:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:29.114 [2024-11-19 08:26:08.195207] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
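The waitforlisten 58617 records above poll until the freshly started spdk_tgt answers on /var/tmp/spdk.sock before the test proceeds; a rough hand-rolled equivalent of that polling loop (autotest_common.sh's waitforlisten additionally checks that the pid is still alive) would be:

    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done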
00:07:29.114 [2024-11-19 08:26:08.195381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58617 ] 00:07:29.114 [2024-11-19 08:26:08.371849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.373 [2024-11-19 08:26:08.473888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.942 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.942 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:29.942 08:26:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:29.942 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.942 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:29.942 [2024-11-19 08:26:09.228168] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:30.200 request: 00:07:30.200 { 00:07:30.200 "trtype": "tcp", 00:07:30.200 "method": "nvmf_get_transports", 00:07:30.200 "req_id": 1 00:07:30.200 } 00:07:30.200 Got JSON-RPC error response 00:07:30.200 response: 00:07:30.200 { 00:07:30.200 "code": -19, 00:07:30.200 "message": "No such device" 00:07:30.200 } 00:07:30.200 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:30.200 08:26:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:30.200 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.200 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:30.200 [2024-11-19 08:26:09.240304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.200 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.200 08:26:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:30.200 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.200 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:30.200 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.200 08:26:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:30.200 { 00:07:30.200 "subsystems": [ 00:07:30.200 { 00:07:30.200 "subsystem": "fsdev", 00:07:30.200 "config": [ 00:07:30.200 { 00:07:30.200 "method": "fsdev_set_opts", 00:07:30.200 "params": { 00:07:30.200 "fsdev_io_pool_size": 65535, 00:07:30.200 "fsdev_io_cache_size": 256 00:07:30.200 } 00:07:30.200 } 00:07:30.200 ] 00:07:30.200 }, 00:07:30.200 { 00:07:30.200 "subsystem": "keyring", 00:07:30.200 "config": [] 00:07:30.200 }, 00:07:30.200 { 00:07:30.200 "subsystem": "iobuf", 00:07:30.200 "config": [ 00:07:30.200 { 00:07:30.200 "method": "iobuf_set_options", 00:07:30.200 "params": { 00:07:30.200 "small_pool_count": 8192, 00:07:30.200 "large_pool_count": 1024, 00:07:30.200 "small_bufsize": 8192, 00:07:30.200 "large_bufsize": 135168, 00:07:30.200 "enable_numa": false 00:07:30.200 } 00:07:30.200 } 00:07:30.200 ] 00:07:30.200 }, 00:07:30.200 { 00:07:30.200 "subsystem": "sock", 00:07:30.200 "config": [ 00:07:30.200 { 
00:07:30.200 "method": "sock_set_default_impl", 00:07:30.200 "params": { 00:07:30.200 "impl_name": "posix" 00:07:30.200 } 00:07:30.200 }, 00:07:30.200 { 00:07:30.200 "method": "sock_impl_set_options", 00:07:30.200 "params": { 00:07:30.200 "impl_name": "ssl", 00:07:30.200 "recv_buf_size": 4096, 00:07:30.200 "send_buf_size": 4096, 00:07:30.200 "enable_recv_pipe": true, 00:07:30.200 "enable_quickack": false, 00:07:30.200 "enable_placement_id": 0, 00:07:30.200 "enable_zerocopy_send_server": true, 00:07:30.200 "enable_zerocopy_send_client": false, 00:07:30.200 "zerocopy_threshold": 0, 00:07:30.200 "tls_version": 0, 00:07:30.200 "enable_ktls": false 00:07:30.200 } 00:07:30.200 }, 00:07:30.200 { 00:07:30.200 "method": "sock_impl_set_options", 00:07:30.200 "params": { 00:07:30.200 "impl_name": "posix", 00:07:30.200 "recv_buf_size": 2097152, 00:07:30.200 "send_buf_size": 2097152, 00:07:30.201 "enable_recv_pipe": true, 00:07:30.201 "enable_quickack": false, 00:07:30.201 "enable_placement_id": 0, 00:07:30.201 "enable_zerocopy_send_server": true, 00:07:30.201 "enable_zerocopy_send_client": false, 00:07:30.201 "zerocopy_threshold": 0, 00:07:30.201 "tls_version": 0, 00:07:30.201 "enable_ktls": false 00:07:30.201 } 00:07:30.201 } 00:07:30.201 ] 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "subsystem": "vmd", 00:07:30.201 "config": [] 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "subsystem": "accel", 00:07:30.201 "config": [ 00:07:30.201 { 00:07:30.201 "method": "accel_set_options", 00:07:30.201 "params": { 00:07:30.201 "small_cache_size": 128, 00:07:30.201 "large_cache_size": 16, 00:07:30.201 "task_count": 2048, 00:07:30.201 "sequence_count": 2048, 00:07:30.201 "buf_count": 2048 00:07:30.201 } 00:07:30.201 } 00:07:30.201 ] 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "subsystem": "bdev", 00:07:30.201 "config": [ 00:07:30.201 { 00:07:30.201 "method": "bdev_set_options", 00:07:30.201 "params": { 00:07:30.201 "bdev_io_pool_size": 65535, 00:07:30.201 "bdev_io_cache_size": 256, 00:07:30.201 "bdev_auto_examine": true, 00:07:30.201 "iobuf_small_cache_size": 128, 00:07:30.201 "iobuf_large_cache_size": 16 00:07:30.201 } 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "method": "bdev_raid_set_options", 00:07:30.201 "params": { 00:07:30.201 "process_window_size_kb": 1024, 00:07:30.201 "process_max_bandwidth_mb_sec": 0 00:07:30.201 } 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "method": "bdev_iscsi_set_options", 00:07:30.201 "params": { 00:07:30.201 "timeout_sec": 30 00:07:30.201 } 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "method": "bdev_nvme_set_options", 00:07:30.201 "params": { 00:07:30.201 "action_on_timeout": "none", 00:07:30.201 "timeout_us": 0, 00:07:30.201 "timeout_admin_us": 0, 00:07:30.201 "keep_alive_timeout_ms": 10000, 00:07:30.201 "arbitration_burst": 0, 00:07:30.201 "low_priority_weight": 0, 00:07:30.201 "medium_priority_weight": 0, 00:07:30.201 "high_priority_weight": 0, 00:07:30.201 "nvme_adminq_poll_period_us": 10000, 00:07:30.201 "nvme_ioq_poll_period_us": 0, 00:07:30.201 "io_queue_requests": 0, 00:07:30.201 "delay_cmd_submit": true, 00:07:30.201 "transport_retry_count": 4, 00:07:30.201 "bdev_retry_count": 3, 00:07:30.201 "transport_ack_timeout": 0, 00:07:30.201 "ctrlr_loss_timeout_sec": 0, 00:07:30.201 "reconnect_delay_sec": 0, 00:07:30.201 "fast_io_fail_timeout_sec": 0, 00:07:30.201 "disable_auto_failback": false, 00:07:30.201 "generate_uuids": false, 00:07:30.201 "transport_tos": 0, 00:07:30.201 "nvme_error_stat": false, 00:07:30.201 "rdma_srq_size": 0, 00:07:30.201 "io_path_stat": false, 
00:07:30.201 "allow_accel_sequence": false, 00:07:30.201 "rdma_max_cq_size": 0, 00:07:30.201 "rdma_cm_event_timeout_ms": 0, 00:07:30.201 "dhchap_digests": [ 00:07:30.201 "sha256", 00:07:30.201 "sha384", 00:07:30.201 "sha512" 00:07:30.201 ], 00:07:30.201 "dhchap_dhgroups": [ 00:07:30.201 "null", 00:07:30.201 "ffdhe2048", 00:07:30.201 "ffdhe3072", 00:07:30.201 "ffdhe4096", 00:07:30.201 "ffdhe6144", 00:07:30.201 "ffdhe8192" 00:07:30.201 ] 00:07:30.201 } 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "method": "bdev_nvme_set_hotplug", 00:07:30.201 "params": { 00:07:30.201 "period_us": 100000, 00:07:30.201 "enable": false 00:07:30.201 } 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "method": "bdev_wait_for_examine" 00:07:30.201 } 00:07:30.201 ] 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "subsystem": "scsi", 00:07:30.201 "config": null 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "subsystem": "scheduler", 00:07:30.201 "config": [ 00:07:30.201 { 00:07:30.201 "method": "framework_set_scheduler", 00:07:30.201 "params": { 00:07:30.201 "name": "static" 00:07:30.201 } 00:07:30.201 } 00:07:30.201 ] 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "subsystem": "vhost_scsi", 00:07:30.201 "config": [] 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "subsystem": "vhost_blk", 00:07:30.201 "config": [] 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "subsystem": "ublk", 00:07:30.201 "config": [] 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "subsystem": "nbd", 00:07:30.201 "config": [] 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "subsystem": "nvmf", 00:07:30.201 "config": [ 00:07:30.201 { 00:07:30.201 "method": "nvmf_set_config", 00:07:30.201 "params": { 00:07:30.201 "discovery_filter": "match_any", 00:07:30.201 "admin_cmd_passthru": { 00:07:30.201 "identify_ctrlr": false 00:07:30.201 }, 00:07:30.201 "dhchap_digests": [ 00:07:30.201 "sha256", 00:07:30.201 "sha384", 00:07:30.201 "sha512" 00:07:30.201 ], 00:07:30.201 "dhchap_dhgroups": [ 00:07:30.201 "null", 00:07:30.201 "ffdhe2048", 00:07:30.201 "ffdhe3072", 00:07:30.201 "ffdhe4096", 00:07:30.201 "ffdhe6144", 00:07:30.201 "ffdhe8192" 00:07:30.201 ] 00:07:30.201 } 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "method": "nvmf_set_max_subsystems", 00:07:30.201 "params": { 00:07:30.201 "max_subsystems": 1024 00:07:30.201 } 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "method": "nvmf_set_crdt", 00:07:30.201 "params": { 00:07:30.201 "crdt1": 0, 00:07:30.201 "crdt2": 0, 00:07:30.201 "crdt3": 0 00:07:30.201 } 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "method": "nvmf_create_transport", 00:07:30.201 "params": { 00:07:30.201 "trtype": "TCP", 00:07:30.201 "max_queue_depth": 128, 00:07:30.201 "max_io_qpairs_per_ctrlr": 127, 00:07:30.201 "in_capsule_data_size": 4096, 00:07:30.201 "max_io_size": 131072, 00:07:30.201 "io_unit_size": 131072, 00:07:30.201 "max_aq_depth": 128, 00:07:30.201 "num_shared_buffers": 511, 00:07:30.201 "buf_cache_size": 4294967295, 00:07:30.201 "dif_insert_or_strip": false, 00:07:30.201 "zcopy": false, 00:07:30.201 "c2h_success": true, 00:07:30.201 "sock_priority": 0, 00:07:30.201 "abort_timeout_sec": 1, 00:07:30.201 "ack_timeout": 0, 00:07:30.201 "data_wr_pool_size": 0 00:07:30.201 } 00:07:30.201 } 00:07:30.201 ] 00:07:30.201 }, 00:07:30.201 { 00:07:30.201 "subsystem": "iscsi", 00:07:30.201 "config": [ 00:07:30.201 { 00:07:30.201 "method": "iscsi_set_options", 00:07:30.201 "params": { 00:07:30.201 "node_base": "iqn.2016-06.io.spdk", 00:07:30.201 "max_sessions": 128, 00:07:30.201 "max_connections_per_session": 2, 00:07:30.201 "max_queue_depth": 64, 00:07:30.201 
"default_time2wait": 2, 00:07:30.201 "default_time2retain": 20, 00:07:30.201 "first_burst_length": 8192, 00:07:30.201 "immediate_data": true, 00:07:30.201 "allow_duplicated_isid": false, 00:07:30.201 "error_recovery_level": 0, 00:07:30.201 "nop_timeout": 60, 00:07:30.201 "nop_in_interval": 30, 00:07:30.201 "disable_chap": false, 00:07:30.201 "require_chap": false, 00:07:30.201 "mutual_chap": false, 00:07:30.201 "chap_group": 0, 00:07:30.201 "max_large_datain_per_connection": 64, 00:07:30.201 "max_r2t_per_connection": 4, 00:07:30.201 "pdu_pool_size": 36864, 00:07:30.201 "immediate_data_pool_size": 16384, 00:07:30.201 "data_out_pool_size": 2048 00:07:30.201 } 00:07:30.201 } 00:07:30.201 ] 00:07:30.201 } 00:07:30.201 ] 00:07:30.201 } 00:07:30.201 08:26:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:30.201 08:26:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58617 00:07:30.201 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58617 ']' 00:07:30.201 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58617 00:07:30.201 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:30.201 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.201 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58617 00:07:30.201 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.201 killing process with pid 58617 00:07:30.201 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.201 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58617' 00:07:30.201 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58617 00:07:30.201 08:26:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58617 00:07:32.738 08:26:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58667 00:07:32.738 08:26:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:32.738 08:26:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:38.008 08:26:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58667 00:07:38.008 08:26:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58667 ']' 00:07:38.008 08:26:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58667 00:07:38.008 08:26:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:38.008 08:26:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.008 08:26:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58667 00:07:38.008 killing process with pid 58667 00:07:38.008 08:26:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.008 08:26:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.008 08:26:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58667' 00:07:38.008 08:26:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58667 00:07:38.008 08:26:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58667 00:07:39.387 08:26:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:39.387 08:26:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:39.387 00:07:39.387 real 0m10.540s 00:07:39.387 user 0m10.222s 00:07:39.387 sys 0m0.709s 00:07:39.387 08:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.387 ************************************ 00:07:39.387 END TEST skip_rpc_with_json 00:07:39.387 ************************************ 00:07:39.387 08:26:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:39.387 08:26:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:39.387 08:26:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.387 08:26:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.387 08:26:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.387 ************************************ 00:07:39.387 START TEST skip_rpc_with_delay 00:07:39.387 ************************************ 00:07:39.387 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:39.387 08:26:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:39.387 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:39.387 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:39.387 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:39.387 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.387 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:39.387 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.387 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:39.387 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.387 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:39.387 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:39.387 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:39.645 [2024-11-19 08:26:18.786753] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
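The ERROR above is the point of skip_rpc_with_delay: spdk_tgt must refuse --wait-for-rpc when --no-rpc-server is also given, since no RPC server would ever arrive to un-pause startup. The NOT/valid_exec_arg machinery traced here only inverts the exit status; a minimal sketch of the same assertion in bash:

    NOT() { ! "$@"; }    # simplified stand-in for autotest_common.sh's NOT helper
    NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc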
00:07:39.645 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:39.645 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.645 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.645 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.645 00:07:39.645 real 0m0.193s 00:07:39.645 user 0m0.102s 00:07:39.645 sys 0m0.088s 00:07:39.645 ************************************ 00:07:39.645 END TEST skip_rpc_with_delay 00:07:39.645 ************************************ 00:07:39.645 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.645 08:26:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:39.645 08:26:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:39.645 08:26:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:39.645 08:26:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:39.645 08:26:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.645 08:26:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.645 08:26:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.645 ************************************ 00:07:39.645 START TEST exit_on_failed_rpc_init 00:07:39.645 ************************************ 00:07:39.645 08:26:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:39.645 08:26:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58801 00:07:39.645 08:26:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:39.645 08:26:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58801 00:07:39.645 08:26:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58801 ']' 00:07:39.645 08:26:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.645 08:26:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.645 08:26:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.645 08:26:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.645 08:26:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:39.904 [2024-11-19 08:26:19.053746] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:07:39.904 [2024-11-19 08:26:19.053916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58801 ] 00:07:40.163 [2024-11-19 08:26:19.234234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.163 [2024-11-19 08:26:19.337642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.100 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.100 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:41.100 08:26:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:41.100 08:26:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:41.100 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:41.100 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:41.100 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:41.100 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.100 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:41.100 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.100 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:41.100 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.100 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:41.100 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:41.100 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:41.100 [2024-11-19 08:26:20.224040] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:41.100 [2024-11-19 08:26:20.224220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58819 ] 00:07:41.359 [2024-11-19 08:26:20.405862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.359 [2024-11-19 08:26:20.513405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.359 [2024-11-19 08:26:20.513534] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
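The error above, together with the spdk_rpc_initialize/spdk_app_stop records that follow, is the failure mode exit_on_failed_rpc_init exists to exercise: the second spdk_tgt (-m 0x2) is pointed at the same default RPC socket as the first (pid 58801), so _spdk_rpc_listen finds /var/tmp/spdk.sock in use and the app exits non-zero, which the records below fold into es=234 and finally es=1. Running two targets side by side needs distinct sockets (and non-conflicting hugepage setup), e.g. via the -r/--rpc-socket option:

    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk0.sock &
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk1.sock &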
00:07:41.359 [2024-11-19 08:26:20.513559] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:41.359 [2024-11-19 08:26:20.513597] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58801 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58801 ']' 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58801 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58801 00:07:41.617 killing process with pid 58801 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58801' 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58801 00:07:41.617 08:26:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58801 00:07:44.149 00:07:44.149 real 0m3.985s 00:07:44.149 user 0m4.523s 00:07:44.149 sys 0m0.537s 00:07:44.149 08:26:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.149 ************************************ 00:07:44.149 END TEST exit_on_failed_rpc_init 00:07:44.149 ************************************ 00:07:44.149 08:26:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:44.149 08:26:22 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:44.149 ************************************ 00:07:44.149 END TEST skip_rpc 00:07:44.149 00:07:44.149 real 0m22.219s 00:07:44.149 user 0m21.688s 00:07:44.149 sys 0m1.877s 00:07:44.149 08:26:22 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.149 08:26:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.149 ************************************ 00:07:44.149 08:26:22 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:44.149 08:26:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.149 08:26:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.149 08:26:22 -- common/autotest_common.sh@10 -- # set +x 00:07:44.149 
************************************ 00:07:44.149 START TEST rpc_client 00:07:44.149 ************************************ 00:07:44.149 08:26:22 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:44.149 * Looking for test storage... 00:07:44.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:44.149 08:26:23 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:44.149 08:26:23 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:44.149 08:26:23 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:07:44.149 08:26:23 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.149 08:26:23 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:44.149 08:26:23 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.149 08:26:23 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:44.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.149 --rc genhtml_branch_coverage=1 00:07:44.149 --rc genhtml_function_coverage=1 00:07:44.149 --rc genhtml_legend=1 00:07:44.149 --rc geninfo_all_blocks=1 00:07:44.149 --rc geninfo_unexecuted_blocks=1 00:07:44.149 00:07:44.149 ' 00:07:44.149 08:26:23 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:44.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.149 --rc genhtml_branch_coverage=1 00:07:44.149 --rc genhtml_function_coverage=1 00:07:44.149 --rc genhtml_legend=1 00:07:44.149 --rc geninfo_all_blocks=1 00:07:44.149 --rc geninfo_unexecuted_blocks=1 00:07:44.149 00:07:44.149 ' 00:07:44.149 08:26:23 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:44.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.150 --rc genhtml_branch_coverage=1 00:07:44.150 --rc genhtml_function_coverage=1 00:07:44.150 --rc genhtml_legend=1 00:07:44.150 --rc geninfo_all_blocks=1 00:07:44.150 --rc geninfo_unexecuted_blocks=1 00:07:44.150 00:07:44.150 ' 00:07:44.150 08:26:23 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:44.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.150 --rc genhtml_branch_coverage=1 00:07:44.150 --rc genhtml_function_coverage=1 00:07:44.150 --rc genhtml_legend=1 00:07:44.150 --rc geninfo_all_blocks=1 00:07:44.150 --rc geninfo_unexecuted_blocks=1 00:07:44.150 00:07:44.150 ' 00:07:44.150 08:26:23 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:44.150 OK 00:07:44.150 08:26:23 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:44.150 00:07:44.150 real 0m0.259s 00:07:44.150 user 0m0.143s 00:07:44.150 sys 0m0.124s 00:07:44.150 08:26:23 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.150 08:26:23 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:44.150 ************************************ 00:07:44.150 END TEST rpc_client 00:07:44.150 ************************************ 00:07:44.150 08:26:23 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:44.150 08:26:23 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.150 08:26:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.150 08:26:23 -- common/autotest_common.sh@10 -- # set +x 00:07:44.150 ************************************ 00:07:44.150 START TEST json_config 00:07:44.150 ************************************ 00:07:44.150 08:26:23 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:44.150 08:26:23 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:44.150 08:26:23 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:07:44.150 08:26:23 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:44.150 08:26:23 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:44.150 08:26:23 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.150 08:26:23 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.150 08:26:23 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.150 08:26:23 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.150 08:26:23 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.150 08:26:23 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.150 08:26:23 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.150 08:26:23 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.150 08:26:23 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.150 08:26:23 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.150 08:26:23 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.150 08:26:23 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:44.150 08:26:23 json_config -- scripts/common.sh@345 -- # : 1 00:07:44.150 08:26:23 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.150 08:26:23 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.150 08:26:23 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:44.150 08:26:23 json_config -- scripts/common.sh@353 -- # local d=1 00:07:44.150 08:26:23 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.150 08:26:23 json_config -- scripts/common.sh@355 -- # echo 1 00:07:44.150 08:26:23 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.409 08:26:23 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:44.409 08:26:23 json_config -- scripts/common.sh@353 -- # local d=2 00:07:44.409 08:26:23 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.409 08:26:23 json_config -- scripts/common.sh@355 -- # echo 2 00:07:44.409 08:26:23 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.409 08:26:23 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.409 08:26:23 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.409 08:26:23 json_config -- scripts/common.sh@368 -- # return 0 00:07:44.409 08:26:23 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.409 08:26:23 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:44.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.409 --rc genhtml_branch_coverage=1 00:07:44.409 --rc genhtml_function_coverage=1 00:07:44.409 --rc genhtml_legend=1 00:07:44.409 --rc geninfo_all_blocks=1 00:07:44.409 --rc geninfo_unexecuted_blocks=1 00:07:44.409 00:07:44.409 ' 00:07:44.409 08:26:23 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:44.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.409 --rc genhtml_branch_coverage=1 00:07:44.409 --rc genhtml_function_coverage=1 00:07:44.409 --rc genhtml_legend=1 00:07:44.409 --rc geninfo_all_blocks=1 00:07:44.409 --rc geninfo_unexecuted_blocks=1 00:07:44.409 00:07:44.409 ' 00:07:44.409 08:26:23 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:44.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.409 --rc genhtml_branch_coverage=1 00:07:44.409 --rc genhtml_function_coverage=1 00:07:44.409 --rc genhtml_legend=1 00:07:44.409 --rc geninfo_all_blocks=1 00:07:44.409 --rc geninfo_unexecuted_blocks=1 00:07:44.409 00:07:44.409 ' 00:07:44.409 08:26:23 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:44.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.409 --rc genhtml_branch_coverage=1 00:07:44.409 --rc genhtml_function_coverage=1 00:07:44.409 --rc genhtml_legend=1 00:07:44.409 --rc geninfo_all_blocks=1 00:07:44.409 --rc geninfo_unexecuted_blocks=1 00:07:44.409 00:07:44.409 ' 00:07:44.409 08:26:23 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.409 08:26:23 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b92d3187-dd6c-45ef-abdc-6bb81c9ac50e 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b92d3187-dd6c-45ef-abdc-6bb81c9ac50e 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:44.409 08:26:23 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:44.409 08:26:23 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.409 08:26:23 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.409 08:26:23 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.409 08:26:23 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.409 08:26:23 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.409 08:26:23 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.409 08:26:23 json_config -- paths/export.sh@5 -- # export PATH 00:07:44.409 08:26:23 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@51 -- # : 0 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:44.409 08:26:23 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:44.409 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:44.409 08:26:23 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:44.409 08:26:23 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:44.409 08:26:23 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:44.409 08:26:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:44.409 08:26:23 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:44.409 08:26:23 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:44.410 08:26:23 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:44.410 WARNING: No tests are enabled so not running JSON configuration tests 00:07:44.410 08:26:23 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:44.410 00:07:44.410 real 0m0.183s 00:07:44.410 user 0m0.110s 00:07:44.410 sys 0m0.073s 00:07:44.410 08:26:23 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.410 08:26:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:44.410 ************************************ 00:07:44.410 END TEST json_config 00:07:44.410 ************************************ 00:07:44.410 08:26:23 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:44.410 08:26:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.410 08:26:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.410 08:26:23 -- common/autotest_common.sh@10 -- # set +x 00:07:44.410 ************************************ 00:07:44.410 START TEST json_config_extra_key 00:07:44.410 ************************************ 00:07:44.410 08:26:23 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:44.410 08:26:23 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:44.410 08:26:23 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:07:44.410 08:26:23 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:44.410 08:26:23 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.410 08:26:23 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.410 08:26:23 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:44.410 08:26:23 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.410 08:26:23 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:44.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.410 --rc genhtml_branch_coverage=1 00:07:44.410 --rc genhtml_function_coverage=1 00:07:44.410 --rc genhtml_legend=1 00:07:44.410 --rc geninfo_all_blocks=1 00:07:44.410 --rc geninfo_unexecuted_blocks=1 00:07:44.410 00:07:44.410 ' 00:07:44.410 08:26:23 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:44.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.410 --rc genhtml_branch_coverage=1 00:07:44.410 --rc genhtml_function_coverage=1 00:07:44.410 --rc genhtml_legend=1 00:07:44.410 --rc geninfo_all_blocks=1 00:07:44.410 --rc geninfo_unexecuted_blocks=1 00:07:44.410 00:07:44.410 ' 00:07:44.410 08:26:23 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:44.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.410 --rc genhtml_branch_coverage=1 00:07:44.410 --rc genhtml_function_coverage=1 00:07:44.410 --rc genhtml_legend=1 00:07:44.410 --rc geninfo_all_blocks=1 00:07:44.410 --rc geninfo_unexecuted_blocks=1 00:07:44.410 00:07:44.410 ' 00:07:44.410 08:26:23 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:44.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.410 --rc genhtml_branch_coverage=1 00:07:44.410 --rc 
genhtml_function_coverage=1 00:07:44.410 --rc genhtml_legend=1 00:07:44.410 --rc geninfo_all_blocks=1 00:07:44.410 --rc geninfo_unexecuted_blocks=1 00:07:44.410 00:07:44.410 ' 00:07:44.410 08:26:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:44.410 08:26:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:44.410 08:26:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.410 08:26:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.410 08:26:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.410 08:26:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.410 08:26:23 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.410 08:26:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.410 08:26:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.410 08:26:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.410 08:26:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b92d3187-dd6c-45ef-abdc-6bb81c9ac50e 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b92d3187-dd6c-45ef-abdc-6bb81c9ac50e 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:44.668 08:26:23 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:44.668 08:26:23 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.668 08:26:23 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.668 08:26:23 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.668 08:26:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.668 08:26:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.668 08:26:23 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.668 08:26:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:44.668 08:26:23 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:44.668 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:44.668 08:26:23 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:44.668 08:26:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:44.668 INFO: launching applications... 00:07:44.668 08:26:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:44.668 08:26:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:44.668 08:26:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:44.668 08:26:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:44.668 08:26:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:44.668 08:26:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:44.668 08:26:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:44.668 08:26:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:44.668 08:26:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:44.668 08:26:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
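A note on the recurring error above: this run and the json_config run before it both print "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected", because line 33 feeds an empty string into an arithmetic test ('[' '' -eq 1 ']'). It is harmless here: the failed test simply evaluates false and the script continues to the -n branch. The pattern and a defensive rewrite look like this; "flag" is a stand-in name, since the actual variable is not visible in the trace:

    # reproduce: an empty/unset value in an arithmetic test is a runtime error
    flag=''
    [ "$flag" -eq 1 ] && echo enabled    # -> [: : integer expression expected

    # defensive rewrite: default the value to 0 before comparing
    if [ "${flag:-0}" -eq 1 ]; then
        echo enabled
    fi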
00:07:44.668 08:26:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:44.668 08:26:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:44.668 08:26:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:44.668 08:26:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:44.668 08:26:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:44.669 08:26:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:44.669 08:26:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:44.669 08:26:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:44.669 08:26:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59018 00:07:44.669 08:26:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:44.669 08:26:23 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:44.669 Waiting for target to run... 00:07:44.669 08:26:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59018 /var/tmp/spdk_tgt.sock 00:07:44.669 08:26:23 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59018 ']' 00:07:44.669 08:26:23 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:44.669 08:26:23 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.669 08:26:23 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:44.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:44.669 08:26:23 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.669 08:26:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:44.669 [2024-11-19 08:26:23.831732] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:44.669 [2024-11-19 08:26:23.832104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59018 ] 00:07:44.927 [2024-11-19 08:26:24.173100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.185 [2024-11-19 08:26:24.271452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.751 00:07:45.751 INFO: shutting down applications... 00:07:45.751 08:26:24 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.751 08:26:24 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:45.751 08:26:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:45.751 08:26:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
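The app lifecycle exercised here, and in the shutdown loop that follows, is: launch spdk_tgt against the extra_key.json config, poll until its RPC socket is up, then SIGINT and poll the PID away. A simplified stand-in for the json_config/common.sh helpers, with the binary path, socket, and retry caps taken from the trace (the 0.1s startup poll interval is assumed; the trace only shows the retry count):

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    SOCK=/var/tmp/spdk_tgt.sock

    "$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" --json extra_key.json &
    pid=$!

    # waitforlisten: up to 100 retries for the RPC UNIX socket to appear
    for ((i = 0; i < 100; i++)); do
        [[ -S $SOCK ]] && break
        sleep 0.1    # interval assumed
    done

    # graceful shutdown: SIGINT, then poll with kill -0, 30 tries x 0.5s
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done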
00:07:45.751 08:26:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:45.751 08:26:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:45.751 08:26:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:45.751 08:26:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59018 ]] 00:07:45.751 08:26:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59018 00:07:45.751 08:26:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:45.751 08:26:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:45.751 08:26:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59018 00:07:45.751 08:26:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:46.318 08:26:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:46.318 08:26:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:46.318 08:26:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59018 00:07:46.318 08:26:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:46.886 08:26:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:46.886 08:26:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:46.886 08:26:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59018 00:07:46.886 08:26:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:47.454 08:26:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:47.454 08:26:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:47.454 08:26:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59018 00:07:47.454 08:26:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:47.713 08:26:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:47.713 08:26:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:47.713 08:26:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59018 00:07:47.713 08:26:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:48.375 08:26:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:48.375 08:26:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:48.375 08:26:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59018 00:07:48.375 08:26:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:48.375 08:26:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:48.375 SPDK target shutdown done 00:07:48.375 Success 00:07:48.375 08:26:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:48.375 08:26:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:48.375 08:26:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:48.375 00:07:48.375 real 0m3.946s 00:07:48.375 user 0m3.761s 00:07:48.375 sys 0m0.496s 00:07:48.375 ************************************ 00:07:48.375 END TEST json_config_extra_key 00:07:48.375 ************************************ 00:07:48.375 08:26:27 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.375 08:26:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:48.375 08:26:27 -- spdk/autotest.sh@161 
-- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:48.375 08:26:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.375 08:26:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.375 08:26:27 -- common/autotest_common.sh@10 -- # set +x 00:07:48.375 ************************************ 00:07:48.375 START TEST alias_rpc 00:07:48.375 ************************************ 00:07:48.375 08:26:27 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:48.375 * Looking for test storage... 00:07:48.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:48.375 08:26:27 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:48.375 08:26:27 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:48.375 08:26:27 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:48.634 08:26:27 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.634 08:26:27 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:48.634 08:26:27 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.634 08:26:27 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:48.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.634 --rc genhtml_branch_coverage=1 00:07:48.634 --rc genhtml_function_coverage=1 00:07:48.634 --rc genhtml_legend=1 00:07:48.634 --rc geninfo_all_blocks=1 00:07:48.634 --rc geninfo_unexecuted_blocks=1 00:07:48.634 00:07:48.634 ' 00:07:48.634 08:26:27 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:48.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.634 --rc genhtml_branch_coverage=1 00:07:48.634 --rc genhtml_function_coverage=1 00:07:48.635 --rc genhtml_legend=1 00:07:48.635 --rc geninfo_all_blocks=1 00:07:48.635 --rc geninfo_unexecuted_blocks=1 00:07:48.635 00:07:48.635 ' 00:07:48.635 08:26:27 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:48.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.635 --rc genhtml_branch_coverage=1 00:07:48.635 --rc genhtml_function_coverage=1 00:07:48.635 --rc genhtml_legend=1 00:07:48.635 --rc geninfo_all_blocks=1 00:07:48.635 --rc geninfo_unexecuted_blocks=1 00:07:48.635 00:07:48.635 ' 00:07:48.635 08:26:27 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:48.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.635 --rc genhtml_branch_coverage=1 00:07:48.635 --rc genhtml_function_coverage=1 00:07:48.635 --rc genhtml_legend=1 00:07:48.635 --rc geninfo_all_blocks=1 00:07:48.635 --rc geninfo_unexecuted_blocks=1 00:07:48.635 00:07:48.635 ' 00:07:48.635 08:26:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:48.635 08:26:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59123 00:07:48.635 08:26:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59123 00:07:48.635 08:26:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:48.635 08:26:27 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59123 ']' 00:07:48.635 08:26:27 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.635 08:26:27 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.635 08:26:27 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:48.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.635 08:26:27 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.635 08:26:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.635 [2024-11-19 08:26:27.865877] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:07:48.635 [2024-11-19 08:26:27.866306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59123 ] 00:07:48.894 [2024-11-19 08:26:28.055107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.153 [2024-11-19 08:26:28.186246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.719 08:26:28 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.719 08:26:28 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:49.719 08:26:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:50.284 08:26:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59123 00:07:50.284 08:26:29 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59123 ']' 00:07:50.284 08:26:29 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59123 00:07:50.284 08:26:29 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:50.284 08:26:29 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.284 08:26:29 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59123 00:07:50.284 killing process with pid 59123 00:07:50.284 08:26:29 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.284 08:26:29 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.284 08:26:29 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59123' 00:07:50.284 08:26:29 alias_rpc -- common/autotest_common.sh@973 -- # kill 59123 00:07:50.284 08:26:29 alias_rpc -- common/autotest_common.sh@978 -- # wait 59123 00:07:52.190 ************************************ 00:07:52.190 END TEST alias_rpc 00:07:52.190 ************************************ 00:07:52.190 00:07:52.190 real 0m3.931s 00:07:52.190 user 0m4.204s 00:07:52.190 sys 0m0.498s 00:07:52.190 08:26:31 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.190 08:26:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.449 08:26:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:52.449 08:26:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:52.449 08:26:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.449 08:26:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.449 08:26:31 -- common/autotest_common.sh@10 -- # set +x 00:07:52.449 ************************************ 00:07:52.449 START TEST spdkcli_tcp 00:07:52.449 ************************************ 00:07:52.449 08:26:31 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:52.449 * Looking for test storage... 
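The alias_rpc teardown traced just above relies on autotest_common.sh's killprocess helper: verify the PID is alive, check the process name after a uname guard (presumably because ps flags differ on FreeBSD), refuse to signal sudo itself, then kill and reap. Condensed to its visible steps (a paraphrase, not the helper's exact code):

    killprocess() {                    # usage: killprocess <pid>
        local pid=$1
        kill -0 "$pid" || return 1     # already gone?
        # never signal sudo itself; the target here is a reactor thread
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                    # reap; works because it is our child
    }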
00:07:52.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:52.449 08:26:31 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:52.449 08:26:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:52.450 08:26:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:52.450 08:26:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.450 08:26:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:52.450 08:26:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.450 08:26:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:52.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.450 --rc genhtml_branch_coverage=1 00:07:52.450 --rc genhtml_function_coverage=1 00:07:52.450 --rc genhtml_legend=1 00:07:52.450 --rc geninfo_all_blocks=1 00:07:52.450 --rc geninfo_unexecuted_blocks=1 00:07:52.450 00:07:52.450 ' 00:07:52.450 08:26:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:52.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.450 --rc genhtml_branch_coverage=1 00:07:52.450 --rc genhtml_function_coverage=1 00:07:52.450 --rc genhtml_legend=1 00:07:52.450 --rc geninfo_all_blocks=1 00:07:52.450 --rc geninfo_unexecuted_blocks=1 00:07:52.450 
00:07:52.450 ' 00:07:52.450 08:26:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:52.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.450 --rc genhtml_branch_coverage=1 00:07:52.450 --rc genhtml_function_coverage=1 00:07:52.450 --rc genhtml_legend=1 00:07:52.450 --rc geninfo_all_blocks=1 00:07:52.450 --rc geninfo_unexecuted_blocks=1 00:07:52.450 00:07:52.450 ' 00:07:52.450 08:26:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:52.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.450 --rc genhtml_branch_coverage=1 00:07:52.450 --rc genhtml_function_coverage=1 00:07:52.450 --rc genhtml_legend=1 00:07:52.450 --rc geninfo_all_blocks=1 00:07:52.450 --rc geninfo_unexecuted_blocks=1 00:07:52.450 00:07:52.450 ' 00:07:52.450 08:26:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:52.450 08:26:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:52.450 08:26:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:52.450 08:26:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:52.450 08:26:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:52.450 08:26:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:52.450 08:26:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:52.450 08:26:31 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.450 08:26:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:52.450 08:26:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59232 00:07:52.450 08:26:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59232 00:07:52.450 08:26:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59232 ']' 00:07:52.450 08:26:31 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.450 08:26:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.450 08:26:31 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.450 08:26:31 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.450 08:26:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:52.450 08:26:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:52.709 [2024-11-19 08:26:31.879031] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
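spdkcli_tcp's whole point is RPC over TCP rather than the default UNIX socket: the lines that follow bridge /var/tmp/spdk.sock to 127.0.0.1:9998 with socat and then issue rpc_get_methods through the bridge. Stripped of the harness, the mechanism is just (flags copied from the trace below):

    # bridge a TCP port to the running spdk_tgt's UNIX-domain RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # same RPC server, now reachable over TCP
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"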
00:07:52.709 [2024-11-19 08:26:31.879202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59232 ] 00:07:52.967 [2024-11-19 08:26:32.058204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:52.967 [2024-11-19 08:26:32.170342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.967 [2024-11-19 08:26:32.170354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.920 08:26:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.920 08:26:32 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:53.920 08:26:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59249 00:07:53.920 08:26:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:53.920 08:26:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:54.195 [ 00:07:54.195 "bdev_malloc_delete", 00:07:54.195 "bdev_malloc_create", 00:07:54.195 "bdev_null_resize", 00:07:54.195 "bdev_null_delete", 00:07:54.195 "bdev_null_create", 00:07:54.195 "bdev_nvme_cuse_unregister", 00:07:54.195 "bdev_nvme_cuse_register", 00:07:54.195 "bdev_opal_new_user", 00:07:54.195 "bdev_opal_set_lock_state", 00:07:54.195 "bdev_opal_delete", 00:07:54.195 "bdev_opal_get_info", 00:07:54.195 "bdev_opal_create", 00:07:54.195 "bdev_nvme_opal_revert", 00:07:54.195 "bdev_nvme_opal_init", 00:07:54.195 "bdev_nvme_send_cmd", 00:07:54.195 "bdev_nvme_set_keys", 00:07:54.195 "bdev_nvme_get_path_iostat", 00:07:54.195 "bdev_nvme_get_mdns_discovery_info", 00:07:54.195 "bdev_nvme_stop_mdns_discovery", 00:07:54.195 "bdev_nvme_start_mdns_discovery", 00:07:54.195 "bdev_nvme_set_multipath_policy", 00:07:54.195 "bdev_nvme_set_preferred_path", 00:07:54.195 "bdev_nvme_get_io_paths", 00:07:54.195 "bdev_nvme_remove_error_injection", 00:07:54.195 "bdev_nvme_add_error_injection", 00:07:54.195 "bdev_nvme_get_discovery_info", 00:07:54.195 "bdev_nvme_stop_discovery", 00:07:54.195 "bdev_nvme_start_discovery", 00:07:54.195 "bdev_nvme_get_controller_health_info", 00:07:54.195 "bdev_nvme_disable_controller", 00:07:54.195 "bdev_nvme_enable_controller", 00:07:54.195 "bdev_nvme_reset_controller", 00:07:54.195 "bdev_nvme_get_transport_statistics", 00:07:54.195 "bdev_nvme_apply_firmware", 00:07:54.195 "bdev_nvme_detach_controller", 00:07:54.195 "bdev_nvme_get_controllers", 00:07:54.195 "bdev_nvme_attach_controller", 00:07:54.195 "bdev_nvme_set_hotplug", 00:07:54.195 "bdev_nvme_set_options", 00:07:54.195 "bdev_passthru_delete", 00:07:54.195 "bdev_passthru_create", 00:07:54.196 "bdev_lvol_set_parent_bdev", 00:07:54.196 "bdev_lvol_set_parent", 00:07:54.196 "bdev_lvol_check_shallow_copy", 00:07:54.196 "bdev_lvol_start_shallow_copy", 00:07:54.196 "bdev_lvol_grow_lvstore", 00:07:54.196 "bdev_lvol_get_lvols", 00:07:54.196 "bdev_lvol_get_lvstores", 00:07:54.196 "bdev_lvol_delete", 00:07:54.196 "bdev_lvol_set_read_only", 00:07:54.196 "bdev_lvol_resize", 00:07:54.196 "bdev_lvol_decouple_parent", 00:07:54.196 "bdev_lvol_inflate", 00:07:54.196 "bdev_lvol_rename", 00:07:54.196 "bdev_lvol_clone_bdev", 00:07:54.196 "bdev_lvol_clone", 00:07:54.196 "bdev_lvol_snapshot", 00:07:54.196 "bdev_lvol_create", 00:07:54.196 "bdev_lvol_delete_lvstore", 00:07:54.196 "bdev_lvol_rename_lvstore", 00:07:54.196 
"bdev_lvol_create_lvstore", 00:07:54.196 "bdev_raid_set_options", 00:07:54.196 "bdev_raid_remove_base_bdev", 00:07:54.196 "bdev_raid_add_base_bdev", 00:07:54.196 "bdev_raid_delete", 00:07:54.196 "bdev_raid_create", 00:07:54.196 "bdev_raid_get_bdevs", 00:07:54.196 "bdev_error_inject_error", 00:07:54.196 "bdev_error_delete", 00:07:54.196 "bdev_error_create", 00:07:54.196 "bdev_split_delete", 00:07:54.196 "bdev_split_create", 00:07:54.196 "bdev_delay_delete", 00:07:54.196 "bdev_delay_create", 00:07:54.196 "bdev_delay_update_latency", 00:07:54.196 "bdev_zone_block_delete", 00:07:54.196 "bdev_zone_block_create", 00:07:54.196 "blobfs_create", 00:07:54.196 "blobfs_detect", 00:07:54.196 "blobfs_set_cache_size", 00:07:54.196 "bdev_xnvme_delete", 00:07:54.196 "bdev_xnvme_create", 00:07:54.196 "bdev_aio_delete", 00:07:54.196 "bdev_aio_rescan", 00:07:54.196 "bdev_aio_create", 00:07:54.196 "bdev_ftl_set_property", 00:07:54.196 "bdev_ftl_get_properties", 00:07:54.196 "bdev_ftl_get_stats", 00:07:54.196 "bdev_ftl_unmap", 00:07:54.196 "bdev_ftl_unload", 00:07:54.196 "bdev_ftl_delete", 00:07:54.196 "bdev_ftl_load", 00:07:54.196 "bdev_ftl_create", 00:07:54.196 "bdev_virtio_attach_controller", 00:07:54.196 "bdev_virtio_scsi_get_devices", 00:07:54.196 "bdev_virtio_detach_controller", 00:07:54.196 "bdev_virtio_blk_set_hotplug", 00:07:54.196 "bdev_iscsi_delete", 00:07:54.196 "bdev_iscsi_create", 00:07:54.196 "bdev_iscsi_set_options", 00:07:54.196 "accel_error_inject_error", 00:07:54.196 "ioat_scan_accel_module", 00:07:54.196 "dsa_scan_accel_module", 00:07:54.196 "iaa_scan_accel_module", 00:07:54.196 "keyring_file_remove_key", 00:07:54.196 "keyring_file_add_key", 00:07:54.196 "keyring_linux_set_options", 00:07:54.196 "fsdev_aio_delete", 00:07:54.196 "fsdev_aio_create", 00:07:54.196 "iscsi_get_histogram", 00:07:54.196 "iscsi_enable_histogram", 00:07:54.196 "iscsi_set_options", 00:07:54.196 "iscsi_get_auth_groups", 00:07:54.196 "iscsi_auth_group_remove_secret", 00:07:54.196 "iscsi_auth_group_add_secret", 00:07:54.196 "iscsi_delete_auth_group", 00:07:54.196 "iscsi_create_auth_group", 00:07:54.196 "iscsi_set_discovery_auth", 00:07:54.196 "iscsi_get_options", 00:07:54.196 "iscsi_target_node_request_logout", 00:07:54.196 "iscsi_target_node_set_redirect", 00:07:54.196 "iscsi_target_node_set_auth", 00:07:54.196 "iscsi_target_node_add_lun", 00:07:54.196 "iscsi_get_stats", 00:07:54.196 "iscsi_get_connections", 00:07:54.196 "iscsi_portal_group_set_auth", 00:07:54.196 "iscsi_start_portal_group", 00:07:54.196 "iscsi_delete_portal_group", 00:07:54.196 "iscsi_create_portal_group", 00:07:54.196 "iscsi_get_portal_groups", 00:07:54.196 "iscsi_delete_target_node", 00:07:54.196 "iscsi_target_node_remove_pg_ig_maps", 00:07:54.196 "iscsi_target_node_add_pg_ig_maps", 00:07:54.196 "iscsi_create_target_node", 00:07:54.196 "iscsi_get_target_nodes", 00:07:54.196 "iscsi_delete_initiator_group", 00:07:54.196 "iscsi_initiator_group_remove_initiators", 00:07:54.196 "iscsi_initiator_group_add_initiators", 00:07:54.196 "iscsi_create_initiator_group", 00:07:54.196 "iscsi_get_initiator_groups", 00:07:54.196 "nvmf_set_crdt", 00:07:54.196 "nvmf_set_config", 00:07:54.196 "nvmf_set_max_subsystems", 00:07:54.196 "nvmf_stop_mdns_prr", 00:07:54.196 "nvmf_publish_mdns_prr", 00:07:54.196 "nvmf_subsystem_get_listeners", 00:07:54.196 "nvmf_subsystem_get_qpairs", 00:07:54.196 "nvmf_subsystem_get_controllers", 00:07:54.196 "nvmf_get_stats", 00:07:54.196 "nvmf_get_transports", 00:07:54.196 "nvmf_create_transport", 00:07:54.196 "nvmf_get_targets", 00:07:54.196 
"nvmf_delete_target", 00:07:54.196 "nvmf_create_target", 00:07:54.196 "nvmf_subsystem_allow_any_host", 00:07:54.196 "nvmf_subsystem_set_keys", 00:07:54.196 "nvmf_subsystem_remove_host", 00:07:54.196 "nvmf_subsystem_add_host", 00:07:54.196 "nvmf_ns_remove_host", 00:07:54.196 "nvmf_ns_add_host", 00:07:54.196 "nvmf_subsystem_remove_ns", 00:07:54.196 "nvmf_subsystem_set_ns_ana_group", 00:07:54.196 "nvmf_subsystem_add_ns", 00:07:54.196 "nvmf_subsystem_listener_set_ana_state", 00:07:54.196 "nvmf_discovery_get_referrals", 00:07:54.196 "nvmf_discovery_remove_referral", 00:07:54.196 "nvmf_discovery_add_referral", 00:07:54.196 "nvmf_subsystem_remove_listener", 00:07:54.196 "nvmf_subsystem_add_listener", 00:07:54.196 "nvmf_delete_subsystem", 00:07:54.196 "nvmf_create_subsystem", 00:07:54.196 "nvmf_get_subsystems", 00:07:54.196 "env_dpdk_get_mem_stats", 00:07:54.196 "nbd_get_disks", 00:07:54.196 "nbd_stop_disk", 00:07:54.196 "nbd_start_disk", 00:07:54.196 "ublk_recover_disk", 00:07:54.196 "ublk_get_disks", 00:07:54.196 "ublk_stop_disk", 00:07:54.196 "ublk_start_disk", 00:07:54.196 "ublk_destroy_target", 00:07:54.196 "ublk_create_target", 00:07:54.196 "virtio_blk_create_transport", 00:07:54.196 "virtio_blk_get_transports", 00:07:54.196 "vhost_controller_set_coalescing", 00:07:54.196 "vhost_get_controllers", 00:07:54.196 "vhost_delete_controller", 00:07:54.196 "vhost_create_blk_controller", 00:07:54.196 "vhost_scsi_controller_remove_target", 00:07:54.196 "vhost_scsi_controller_add_target", 00:07:54.196 "vhost_start_scsi_controller", 00:07:54.196 "vhost_create_scsi_controller", 00:07:54.196 "thread_set_cpumask", 00:07:54.196 "scheduler_set_options", 00:07:54.196 "framework_get_governor", 00:07:54.196 "framework_get_scheduler", 00:07:54.196 "framework_set_scheduler", 00:07:54.196 "framework_get_reactors", 00:07:54.196 "thread_get_io_channels", 00:07:54.196 "thread_get_pollers", 00:07:54.196 "thread_get_stats", 00:07:54.196 "framework_monitor_context_switch", 00:07:54.196 "spdk_kill_instance", 00:07:54.196 "log_enable_timestamps", 00:07:54.196 "log_get_flags", 00:07:54.196 "log_clear_flag", 00:07:54.196 "log_set_flag", 00:07:54.196 "log_get_level", 00:07:54.196 "log_set_level", 00:07:54.196 "log_get_print_level", 00:07:54.196 "log_set_print_level", 00:07:54.196 "framework_enable_cpumask_locks", 00:07:54.196 "framework_disable_cpumask_locks", 00:07:54.196 "framework_wait_init", 00:07:54.196 "framework_start_init", 00:07:54.196 "scsi_get_devices", 00:07:54.196 "bdev_get_histogram", 00:07:54.196 "bdev_enable_histogram", 00:07:54.196 "bdev_set_qos_limit", 00:07:54.196 "bdev_set_qd_sampling_period", 00:07:54.196 "bdev_get_bdevs", 00:07:54.196 "bdev_reset_iostat", 00:07:54.196 "bdev_get_iostat", 00:07:54.196 "bdev_examine", 00:07:54.196 "bdev_wait_for_examine", 00:07:54.196 "bdev_set_options", 00:07:54.196 "accel_get_stats", 00:07:54.196 "accel_set_options", 00:07:54.196 "accel_set_driver", 00:07:54.196 "accel_crypto_key_destroy", 00:07:54.196 "accel_crypto_keys_get", 00:07:54.196 "accel_crypto_key_create", 00:07:54.196 "accel_assign_opc", 00:07:54.196 "accel_get_module_info", 00:07:54.196 "accel_get_opc_assignments", 00:07:54.196 "vmd_rescan", 00:07:54.196 "vmd_remove_device", 00:07:54.196 "vmd_enable", 00:07:54.196 "sock_get_default_impl", 00:07:54.196 "sock_set_default_impl", 00:07:54.196 "sock_impl_set_options", 00:07:54.196 "sock_impl_get_options", 00:07:54.196 "iobuf_get_stats", 00:07:54.196 "iobuf_set_options", 00:07:54.196 "keyring_get_keys", 00:07:54.196 "framework_get_pci_devices", 00:07:54.196 
"framework_get_config", 00:07:54.196 "framework_get_subsystems", 00:07:54.196 "fsdev_set_opts", 00:07:54.196 "fsdev_get_opts", 00:07:54.196 "trace_get_info", 00:07:54.196 "trace_get_tpoint_group_mask", 00:07:54.196 "trace_disable_tpoint_group", 00:07:54.196 "trace_enable_tpoint_group", 00:07:54.196 "trace_clear_tpoint_mask", 00:07:54.196 "trace_set_tpoint_mask", 00:07:54.196 "notify_get_notifications", 00:07:54.196 "notify_get_types", 00:07:54.196 "spdk_get_version", 00:07:54.196 "rpc_get_methods" 00:07:54.196 ] 00:07:54.196 08:26:33 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:54.196 08:26:33 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:54.196 08:26:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:54.196 08:26:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:54.196 08:26:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59232 00:07:54.196 08:26:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59232 ']' 00:07:54.196 08:26:33 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59232 00:07:54.196 08:26:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:54.196 08:26:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.196 08:26:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59232 00:07:54.197 killing process with pid 59232 00:07:54.197 08:26:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.197 08:26:33 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.197 08:26:33 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59232' 00:07:54.197 08:26:33 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59232 00:07:54.197 08:26:33 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59232 00:07:56.730 ************************************ 00:07:56.730 END TEST spdkcli_tcp 00:07:56.730 ************************************ 00:07:56.730 00:07:56.730 real 0m3.886s 00:07:56.730 user 0m7.163s 00:07:56.730 sys 0m0.576s 00:07:56.730 08:26:35 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.730 08:26:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:56.730 08:26:35 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:56.730 08:26:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.730 08:26:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.730 08:26:35 -- common/autotest_common.sh@10 -- # set +x 00:07:56.730 ************************************ 00:07:56.730 START TEST dpdk_mem_utility 00:07:56.730 ************************************ 00:07:56.730 08:26:35 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:56.730 * Looking for test storage... 
00:07:56.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:56.730 08:26:35 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:56.730 08:26:35 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:56.730 08:26:35 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:07:56.730 08:26:35 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
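Every test section opens with the same scripts/common.sh trace seen here again: lt 1.15 2 decides whether the installed lcov predates 2.x so the matching LCOV_OPTS block can be exported. The algorithm the trace walks through (split both versions on ., - and :, default missing or non-numeric components via the decimal helper, compare component-wise) boils down to roughly the following; this is a paraphrase of cmp_versions, not a verbatim copy:

    version_lt() {                      # usage: version_lt 1.15 2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v a b max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}          # "decimal": missing -> 0
            [[ $a =~ ^[0-9]+$ ]] || a=0
            [[ $b =~ ^[0-9]+$ ]] || b=0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                                     # equal is not less-than
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.x lcov"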
00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.730 08:26:35 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:56.730 08:26:35 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.730 08:26:35 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:56.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.730 --rc genhtml_branch_coverage=1 00:07:56.730 --rc genhtml_function_coverage=1 00:07:56.730 --rc genhtml_legend=1 00:07:56.730 --rc geninfo_all_blocks=1 00:07:56.730 --rc geninfo_unexecuted_blocks=1 00:07:56.730 00:07:56.730 ' 00:07:56.730 08:26:35 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:56.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.730 --rc genhtml_branch_coverage=1 00:07:56.730 --rc genhtml_function_coverage=1 00:07:56.730 --rc genhtml_legend=1 00:07:56.730 --rc geninfo_all_blocks=1 00:07:56.730 --rc geninfo_unexecuted_blocks=1 00:07:56.730 00:07:56.730 ' 00:07:56.730 08:26:35 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:56.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.730 --rc genhtml_branch_coverage=1 00:07:56.730 --rc genhtml_function_coverage=1 00:07:56.730 --rc genhtml_legend=1 00:07:56.730 --rc geninfo_all_blocks=1 00:07:56.730 --rc geninfo_unexecuted_blocks=1 00:07:56.730 00:07:56.730 ' 00:07:56.730 08:26:35 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:56.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.730 --rc genhtml_branch_coverage=1 00:07:56.730 --rc genhtml_function_coverage=1 00:07:56.730 --rc genhtml_legend=1 00:07:56.731 --rc geninfo_all_blocks=1 00:07:56.731 --rc geninfo_unexecuted_blocks=1 00:07:56.731 00:07:56.731 ' 00:07:56.731 08:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:56.731 08:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59354 00:07:56.731 08:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59354 00:07:56.731 08:26:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:56.731 08:26:35 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59354 ']' 00:07:56.731 08:26:35 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.731 08:26:35 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.731 08:26:35 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.731 08:26:35 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.731 08:26:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:56.731 [2024-11-19 08:26:35.721775] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
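The mechanism under test is a two-step one, visible in the dump that follows: the env_dpdk_get_mem_stats RPC makes the running target write a raw snapshot to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then renders it, plain for the heap/mempool/memzone summary and with -m 0 for the per-element listing of heap 0. As a standalone sequence against an already-running target:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

    "$RPC" env_dpdk_get_mem_stats    # -> {"filename": "/tmp/spdk_mem_dump.txt"}

    "$MEM_SCRIPT"                    # heap / mempool / memzone summary
    "$MEM_SCRIPT" -m 0               # element-level detail for heap id 0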
00:07:56.731 [2024-11-19 08:26:35.722633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59354 ] 00:07:56.731 [2024-11-19 08:26:35.901165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.990 [2024-11-19 08:26:36.069314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.558 08:26:36 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.558 08:26:36 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:57.558 08:26:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:57.558 08:26:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:57.558 08:26:36 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.558 08:26:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:57.558 { 00:07:57.818 "filename": "/tmp/spdk_mem_dump.txt" 00:07:57.818 } 00:07:57.818 08:26:36 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.818 08:26:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:57.818 DPDK memory size 816.000000 MiB in 1 heap(s) 00:07:57.818 1 heaps totaling size 816.000000 MiB 00:07:57.818 size: 816.000000 MiB heap id: 0 00:07:57.818 end heaps---------- 00:07:57.818 9 mempools totaling size 595.772034 MiB 00:07:57.818 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:57.818 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:57.818 size: 92.545471 MiB name: bdev_io_59354 00:07:57.818 size: 50.003479 MiB name: msgpool_59354 00:07:57.818 size: 36.509338 MiB name: fsdev_io_59354 00:07:57.818 size: 21.763794 MiB name: PDU_Pool 00:07:57.818 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:57.818 size: 4.133484 MiB name: evtpool_59354 00:07:57.818 size: 0.026123 MiB name: Session_Pool 00:07:57.818 end mempools------- 00:07:57.818 6 memzones totaling size 4.142822 MiB 00:07:57.818 size: 1.000366 MiB name: RG_ring_0_59354 00:07:57.818 size: 1.000366 MiB name: RG_ring_1_59354 00:07:57.818 size: 1.000366 MiB name: RG_ring_4_59354 00:07:57.818 size: 1.000366 MiB name: RG_ring_5_59354 00:07:57.818 size: 0.125366 MiB name: RG_ring_2_59354 00:07:57.818 size: 0.015991 MiB name: RG_ring_3_59354 00:07:57.818 end memzones------- 00:07:57.818 08:26:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:57.818 heap id: 0 total size: 816.000000 MiB number of busy elements: 316 number of free elements: 18 00:07:57.818 list of free elements. 
size: 16.791138 MiB 00:07:57.818 element at address: 0x200006400000 with size: 1.995972 MiB 00:07:57.818 element at address: 0x20000a600000 with size: 1.995972 MiB 00:07:57.818 element at address: 0x200003e00000 with size: 1.991028 MiB 00:07:57.818 element at address: 0x200018d00040 with size: 0.999939 MiB 00:07:57.818 element at address: 0x200019100040 with size: 0.999939 MiB 00:07:57.818 element at address: 0x200019200000 with size: 0.999084 MiB 00:07:57.818 element at address: 0x200031e00000 with size: 0.994324 MiB 00:07:57.818 element at address: 0x200000400000 with size: 0.992004 MiB 00:07:57.818 element at address: 0x200018a00000 with size: 0.959656 MiB 00:07:57.818 element at address: 0x200019500040 with size: 0.936401 MiB 00:07:57.818 element at address: 0x200000200000 with size: 0.716980 MiB 00:07:57.818 element at address: 0x20001ac00000 with size: 0.561462 MiB 00:07:57.818 element at address: 0x200000c00000 with size: 0.490173 MiB 00:07:57.818 element at address: 0x200018e00000 with size: 0.487976 MiB 00:07:57.818 element at address: 0x200019600000 with size: 0.485413 MiB 00:07:57.818 element at address: 0x200012c00000 with size: 0.443481 MiB 00:07:57.818 element at address: 0x200028000000 with size: 0.390442 MiB 00:07:57.818 element at address: 0x200000800000 with size: 0.350891 MiB 00:07:57.818 list of standard malloc elements. size: 199.287964 MiB 00:07:57.818 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:07:57.818 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:07:57.818 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:07:57.818 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:07:57.818 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:57.818 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:57.818 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:07:57.818 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:57.818 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:07:57.818 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:07:57.818 element at address: 0x200012bff040 with size: 0.000305 MiB [several hundred per-slot entries of the form "element at address: 0x... with size: 0.000244 MiB" elided; they span the 0x2000002d/0x2000004f, 0x20000087/0x2000008f, 0x200000c7, 0x20000a5ff, 0x200012bff/0x200012c7, 0x200018e7d, 0x2000192-0x2000196, 0x20001ac8f-0x20001ac95 and 0x200028063-0x2000280fe address ranges] 00:07:57.820 list of memzone associated elements.
size: 599.920898 MiB 00:07:57.820 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:07:57.820 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:57.820 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:07:57.820 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:57.820 element at address: 0x200012df4740 with size: 92.045105 MiB 00:07:57.820 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59354_0 00:07:57.820 element at address: 0x200000dff340 with size: 48.003113 MiB 00:07:57.820 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59354_0 00:07:57.820 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:07:57.820 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59354_0 00:07:57.820 element at address: 0x2000197be900 with size: 20.255615 MiB 00:07:57.820 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:57.820 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:07:57.820 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:57.820 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:07:57.820 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59354_0 00:07:57.820 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:07:57.820 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59354 00:07:57.820 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:57.820 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59354 00:07:57.820 element at address: 0x200018efde00 with size: 1.008179 MiB 00:07:57.820 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:57.820 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:07:57.820 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:57.820 element at address: 0x200018afde00 with size: 1.008179 MiB 00:07:57.820 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:57.820 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:07:57.820 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:57.820 element at address: 0x200000cff100 with size: 1.000549 MiB 00:07:57.820 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59354 00:07:57.820 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:07:57.820 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59354 00:07:57.820 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:07:57.820 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59354 00:07:57.820 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:07:57.820 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59354 00:07:57.820 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:07:57.820 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59354 00:07:57.820 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:07:57.820 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59354 00:07:57.820 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:07:57.820 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:57.820 element at address: 0x200012c72280 with size: 0.500549 MiB 00:07:57.820 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:57.820 element at address: 0x20001967c440 with size: 0.250549 MiB 00:07:57.820 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:07:57.820 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:07:57.820 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59354 00:07:57.820 element at address: 0x20000085df80 with size: 0.125549 MiB 00:07:57.820 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59354 00:07:57.820 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:07:57.820 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:57.820 element at address: 0x200028064140 with size: 0.023804 MiB 00:07:57.820 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:57.820 element at address: 0x200000859d40 with size: 0.016174 MiB 00:07:57.820 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59354 00:07:57.821 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:07:57.821 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:57.821 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:07:57.821 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59354 00:07:57.821 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:07:57.821 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59354 00:07:57.821 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:07:57.821 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59354 00:07:57.821 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:07:57.821 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:57.821 08:26:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:57.821 08:26:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59354 00:07:57.821 08:26:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59354 ']' 00:07:57.821 08:26:36 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59354 00:07:57.821 08:26:36 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:57.821 08:26:36 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.821 08:26:36 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59354 00:07:57.821 killing process with pid 59354 00:07:57.821 08:26:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:57.821 08:26:37 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:57.821 08:26:37 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59354' 00:07:57.821 08:26:37 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59354 00:07:57.821 08:26:37 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59354 00:08:00.418 ************************************ 00:08:00.418 END TEST dpdk_mem_utility 00:08:00.418 ************************************ 00:08:00.418 00:08:00.418 real 0m3.628s 00:08:00.418 user 0m3.756s 00:08:00.418 sys 0m0.486s 00:08:00.418 08:26:39 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.418 08:26:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:00.418 08:26:39 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:00.418 08:26:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.418 08:26:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.418 08:26:39 -- common/autotest_common.sh@10 -- # set +x 
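The element and memzone listings above are DPDK heap bookkeeping captured by the dpdk_mem_utility test. A minimal sketch of requesting the same dump from a running SPDK app over RPC; env_dpdk_get_mem_stats is a real SPDK RPC, while the socket path and the reply shape shown are the usual defaults, assumed here:

    # Ask the running app to write its DPDK memory stats to a file; the RPC
    # replies with the path of the generated dump.
    scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
    # typical reply: {"filename": "/tmp/spdk_mem_dump.txt"} -- the file holds
    # per-element and memzone listings like the ones printed above.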
00:08:00.418 ************************************ 00:08:00.418 START TEST event 00:08:00.418 ************************************ 00:08:00.418 08:26:39 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:00.418 * Looking for test storage... 00:08:00.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:00.418 08:26:39 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:00.418 08:26:39 event -- common/autotest_common.sh@1693 -- # lcov --version 00:08:00.418 08:26:39 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:00.418 08:26:39 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:00.418 08:26:39 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.418 08:26:39 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.418 08:26:39 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.418 08:26:39 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.418 08:26:39 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.418 08:26:39 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.418 08:26:39 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.418 08:26:39 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.418 08:26:39 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.418 08:26:39 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.418 08:26:39 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.418 08:26:39 event -- scripts/common.sh@344 -- # case "$op" in 00:08:00.418 08:26:39 event -- scripts/common.sh@345 -- # : 1 00:08:00.418 08:26:39 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.418 08:26:39 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:00.418 08:26:39 event -- scripts/common.sh@365 -- # decimal 1 00:08:00.418 08:26:39 event -- scripts/common.sh@353 -- # local d=1 00:08:00.418 08:26:39 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.418 08:26:39 event -- scripts/common.sh@355 -- # echo 1 00:08:00.418 08:26:39 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.418 08:26:39 event -- scripts/common.sh@366 -- # decimal 2 00:08:00.418 08:26:39 event -- scripts/common.sh@353 -- # local d=2 00:08:00.418 08:26:39 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.418 08:26:39 event -- scripts/common.sh@355 -- # echo 2 00:08:00.418 08:26:39 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.418 08:26:39 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.418 08:26:39 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.418 08:26:39 event -- scripts/common.sh@368 -- # return 0 00:08:00.418 08:26:39 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.418 08:26:39 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:00.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.418 --rc genhtml_branch_coverage=1 00:08:00.418 --rc genhtml_function_coverage=1 00:08:00.418 --rc genhtml_legend=1 00:08:00.418 --rc geninfo_all_blocks=1 00:08:00.418 --rc geninfo_unexecuted_blocks=1 00:08:00.418 00:08:00.418 ' 00:08:00.418 08:26:39 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:00.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.418 --rc genhtml_branch_coverage=1 00:08:00.418 --rc genhtml_function_coverage=1 00:08:00.418 --rc genhtml_legend=1 00:08:00.418 --rc 
geninfo_all_blocks=1 00:08:00.418 --rc geninfo_unexecuted_blocks=1 00:08:00.418 00:08:00.418 ' 00:08:00.418 08:26:39 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:00.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.418 --rc genhtml_branch_coverage=1 00:08:00.418 --rc genhtml_function_coverage=1 00:08:00.418 --rc genhtml_legend=1 00:08:00.418 --rc geninfo_all_blocks=1 00:08:00.418 --rc geninfo_unexecuted_blocks=1 00:08:00.418 00:08:00.418 ' 00:08:00.418 08:26:39 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:00.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.418 --rc genhtml_branch_coverage=1 00:08:00.418 --rc genhtml_function_coverage=1 00:08:00.418 --rc genhtml_legend=1 00:08:00.418 --rc geninfo_all_blocks=1 00:08:00.418 --rc geninfo_unexecuted_blocks=1 00:08:00.418 00:08:00.418 ' 00:08:00.418 08:26:39 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:00.418 08:26:39 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:00.418 08:26:39 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:00.418 08:26:39 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:00.418 08:26:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.418 08:26:39 event -- common/autotest_common.sh@10 -- # set +x 00:08:00.418 ************************************ 00:08:00.418 START TEST event_perf 00:08:00.418 ************************************ 00:08:00.418 08:26:39 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:00.418 Running I/O for 1 seconds...[2024-11-19 08:26:39.383124] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:00.418 [2024-11-19 08:26:39.383432] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59458 ] 00:08:00.418 [2024-11-19 08:26:39.571043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.418 [2024-11-19 08:26:39.702635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.418 [2024-11-19 08:26:39.702751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.418 Running I/O for 1 seconds...[2024-11-19 08:26:39.702843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.418 [2024-11-19 08:26:39.702852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.796 00:08:01.796 lcore 0: 181986 00:08:01.796 lcore 1: 181986 00:08:01.796 lcore 2: 181986 00:08:01.796 lcore 3: 181985 00:08:01.796 done. 
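The scripts/common.sh xtrace above ("lt 1.15 2" via cmp_versions) decides whether the installed lcov needs the legacy option spelling by comparing dotted version strings component by component. A condensed bash sketch of the same logic; the helper name ver_lt is hypothetical and, unlike the real script, the sketch assumes purely numeric components:

    ver_lt() {                       # ver_lt 1.15 2 -> exit 0 iff $1 < $2
        local IFS=.-:                # split on the same separators as the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=${#ver1[@]}
        (( ${#ver2[@]} > len )) && len=${#ver2[@]}
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                     # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo "old lcov: fall back to --rc lcov_* option spelling"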
00:08:01.796 00:08:01.796 real 0m1.608s 00:08:01.796 user 0m4.365s 00:08:01.796 sys 0m0.117s 00:08:01.796 08:26:40 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.796 ************************************ 00:08:01.796 END TEST event_perf 00:08:01.796 ************************************ 00:08:01.796 08:26:40 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:01.796 08:26:40 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:01.796 08:26:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:01.796 08:26:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.796 08:26:40 event -- common/autotest_common.sh@10 -- # set +x 00:08:01.796 ************************************ 00:08:01.796 START TEST event_reactor 00:08:01.796 ************************************ 00:08:01.796 08:26:40 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:01.796 [2024-11-19 08:26:41.044669] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:01.796 [2024-11-19 08:26:41.044826] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59499 ] 00:08:02.055 [2024-11-19 08:26:41.229209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.055 [2024-11-19 08:26:41.338846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.456 test_start 00:08:03.456 oneshot 00:08:03.456 tick 100 00:08:03.456 tick 100 00:08:03.456 tick 250 00:08:03.456 tick 100 00:08:03.456 tick 100 00:08:03.456 tick 100 00:08:03.456 tick 250 00:08:03.456 tick 500 00:08:03.456 tick 100 00:08:03.456 tick 100 00:08:03.456 tick 250 00:08:03.456 tick 100 00:08:03.456 tick 100 00:08:03.456 test_end 00:08:03.456 00:08:03.456 real 0m1.569s 00:08:03.456 user 0m1.366s 00:08:03.456 sys 0m0.093s 00:08:03.456 08:26:42 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.456 ************************************ 00:08:03.456 END TEST event_reactor 00:08:03.456 ************************************ 00:08:03.456 08:26:42 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:03.456 08:26:42 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:03.456 08:26:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:03.456 08:26:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.456 08:26:42 event -- common/autotest_common.sh@10 -- # set +x 00:08:03.456 ************************************ 00:08:03.456 START TEST event_reactor_perf 00:08:03.456 ************************************ 00:08:03.456 08:26:42 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:03.456 [2024-11-19 08:26:42.659388] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:03.456 [2024-11-19 08:26:42.659536] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59542 ] 00:08:03.715 [2024-11-19 08:26:42.836994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.715 [2024-11-19 08:26:42.941024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.091 test_start 00:08:05.091 test_end 00:08:05.091 Performance: 272137 events per second 00:08:05.091 00:08:05.091 real 0m1.549s 00:08:05.091 user 0m1.349s 00:08:05.091 sys 0m0.090s 00:08:05.091 08:26:44 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.091 ************************************ 00:08:05.091 END TEST event_reactor_perf 00:08:05.091 ************************************ 00:08:05.091 08:26:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:05.091 08:26:44 event -- event/event.sh@49 -- # uname -s 00:08:05.091 08:26:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:05.091 08:26:44 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:05.091 08:26:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.091 08:26:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.091 08:26:44 event -- common/autotest_common.sh@10 -- # set +x 00:08:05.091 ************************************ 00:08:05.091 START TEST event_scheduler 00:08:05.091 ************************************ 00:08:05.091 08:26:44 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:05.091 * Looking for test storage... 
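Every START TEST/END TEST banner pair in this log, together with the real/user/sys timing lines, comes from the harness's run_test wrapper. A simplified sketch of that pattern; the real helper lives in test/common/autotest_common.sh and additionally handles xtrace toggling and failure accounting:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                    # produces the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test event_reactor_perf ./reactor_perf -t 1   # illustrative invocation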
00:08:05.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:05.091 08:26:44 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:05.091 08:26:44 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:08:05.091 08:26:44 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:05.350 08:26:44 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:05.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
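The "Waiting for process to start up and listen on UNIX domain socket..." message above is the harness's waitforlisten step: launch the app, then poll its RPC socket until it answers. A condensed sketch of that launch-then-poll pattern; $SPDK_DIR, the retry count, and the sleep interval are illustrative, while rpc_get_methods is a real SPDK RPC that succeeds once the app is listening:

    "$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    for (( i = 0; i < 100; i++ )); do
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done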
00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.350 08:26:44 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:05.350 08:26:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.350 08:26:44 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:05.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.350 --rc genhtml_branch_coverage=1 00:08:05.350 --rc genhtml_function_coverage=1 00:08:05.350 --rc genhtml_legend=1 00:08:05.350 --rc geninfo_all_blocks=1 00:08:05.350 --rc geninfo_unexecuted_blocks=1 00:08:05.350 00:08:05.350 ' 00:08:05.350 08:26:44 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:05.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.350 --rc genhtml_branch_coverage=1 00:08:05.350 --rc genhtml_function_coverage=1 00:08:05.350 --rc genhtml_legend=1 00:08:05.350 --rc geninfo_all_blocks=1 00:08:05.350 --rc geninfo_unexecuted_blocks=1 00:08:05.350 00:08:05.350 ' 00:08:05.350 08:26:44 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:05.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.350 --rc genhtml_branch_coverage=1 00:08:05.350 --rc genhtml_function_coverage=1 00:08:05.350 --rc genhtml_legend=1 00:08:05.350 --rc geninfo_all_blocks=1 00:08:05.350 --rc geninfo_unexecuted_blocks=1 00:08:05.350 00:08:05.350 ' 00:08:05.350 08:26:44 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:05.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.350 --rc genhtml_branch_coverage=1 00:08:05.350 --rc genhtml_function_coverage=1 00:08:05.350 --rc genhtml_legend=1 00:08:05.350 --rc geninfo_all_blocks=1 00:08:05.350 --rc geninfo_unexecuted_blocks=1 00:08:05.350 00:08:05.350 ' 00:08:05.350 08:26:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:05.350 08:26:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59614 00:08:05.350 08:26:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:05.350 08:26:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59614 00:08:05.350 08:26:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:05.350 08:26:44 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59614 ']' 00:08:05.350 08:26:44 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.350 08:26:44 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.350 08:26:44 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.350 08:26:44 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.350 08:26:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:05.350 [2024-11-19 08:26:44.506309] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:05.351 [2024-11-19 08:26:44.506732] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59614 ] 00:08:05.609 [2024-11-19 08:26:44.694342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.609 [2024-11-19 08:26:44.806495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.610 [2024-11-19 08:26:44.806667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.610 [2024-11-19 08:26:44.806740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.610 [2024-11-19 08:26:44.806744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.547 08:26:45 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.547 08:26:45 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:06.547 08:26:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:06.547 08:26:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.547 08:26:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:06.547 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:06.547 POWER: Cannot set governor of lcore 0 to userspace 00:08:06.547 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:06.547 POWER: Cannot set governor of lcore 0 to performance 00:08:06.547 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:06.547 POWER: Cannot set governor of lcore 0 to userspace 00:08:06.547 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:06.547 POWER: Cannot set governor of lcore 0 to userspace 00:08:06.547 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:06.547 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:06.547 POWER: Unable to set Power Management Environment for lcore 0 00:08:06.547 [2024-11-19 08:26:45.522533] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:08:06.547 [2024-11-19 08:26:45.522717] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:08:06.547 [2024-11-19 08:26:45.522743] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:06.547 [2024-11-19 08:26:45.522769] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:06.547 [2024-11-19 08:26:45.522783] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:06.547 [2024-11-19 08:26:45.522797] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:06.547 08:26:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.547 08:26:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:06.547 08:26:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.547 08:26:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:06.547 [2024-11-19 08:26:45.813425] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
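The POWER errors that follow mean no usable cpufreq governor is exposed inside the VM, so the dynamic scheduler starts without the dpdk governor and falls back to its defaults (load limit 20, core limit 80, core busy 95). Selecting and inspecting the scheduler by hand uses the same RPCs the test traces; framework_set_scheduler, framework_start_init, and framework_get_scheduler are real SPDK RPCs, with the default socket path assumed:

    # Pick the dynamic scheduler while the app sits in --wait-for-rpc,
    # then let subsystem initialization proceed and confirm the choice.
    scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
    scripts/rpc.py -s /var/tmp/spdk.sock framework_get_scheduler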
00:08:06.547 08:26:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.547 08:26:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:06.547 08:26:45 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.547 08:26:45 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.547 08:26:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:06.547 ************************************ 00:08:06.547 START TEST scheduler_create_thread 00:08:06.547 ************************************ 00:08:06.547 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:06.547 08:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:06.548 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.548 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.806 2 00:08:06.806 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.806 08:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:06.806 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.806 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.806 3 00:08:06.806 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.806 08:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:06.806 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.806 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.806 4 00:08:06.806 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.806 08:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:06.806 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.806 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.806 5 00:08:06.806 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.807 6 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.807 7 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.807 8 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.807 9 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.807 10 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.807 08:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:07.742 ************************************ 00:08:07.742 END TEST scheduler_create_thread 00:08:07.742 ************************************ 00:08:07.742 08:26:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.742 00:08:07.742 real 0m1.176s 00:08:07.742 user 0m0.017s 00:08:07.742 sys 0m0.002s 00:08:07.742 08:26:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.742 08:26:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:08.000 08:26:47 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:08.000 08:26:47 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59614 00:08:08.000 08:26:47 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59614 ']' 00:08:08.000 08:26:47 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59614 00:08:08.000 08:26:47 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:08.000 08:26:47 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.000 08:26:47 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59614 00:08:08.000 killing process with pid 59614 00:08:08.000 08:26:47 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:08.000 08:26:47 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:08.000 08:26:47 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59614' 00:08:08.000 08:26:47 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59614 00:08:08.000 08:26:47 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59614 00:08:08.258 [2024-11-19 08:26:47.480934] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
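The thread lifecycle exercised above maps onto the test's rpc.py plugin; scheduler_plugin is the test-local plugin from test/event/scheduler/ and must be importable by rpc.py (e.g. via PYTHONPATH) for --plugin to resolve. Replaying the traced calls by hand, with thread IDs 11 and 12 matching what the create calls returned in the trace:

    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0   # trace: returned thread_id 11
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100     # trace: returned thread_id 12
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12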
00:08:09.635 00:08:09.635 real 0m4.298s 00:08:09.635 user 0m7.352s 00:08:09.635 sys 0m0.460s 00:08:09.635 ************************************ 00:08:09.635 END TEST event_scheduler 00:08:09.635 ************************************ 00:08:09.635 08:26:48 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.635 08:26:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:09.635 08:26:48 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:09.635 08:26:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:09.635 08:26:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.635 08:26:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.635 08:26:48 event -- common/autotest_common.sh@10 -- # set +x 00:08:09.635 ************************************ 00:08:09.635 START TEST app_repeat 00:08:09.635 ************************************ 00:08:09.635 08:26:48 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:09.635 08:26:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.635 08:26:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:09.635 08:26:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:09.635 08:26:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:09.635 08:26:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:09.635 08:26:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:09.635 08:26:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:09.635 Process app_repeat pid: 59709 00:08:09.635 08:26:48 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59709 00:08:09.635 08:26:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:09.635 08:26:48 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:09.635 08:26:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59709' 00:08:09.635 08:26:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:09.635 spdk_app_start Round 0 00:08:09.636 08:26:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:09.636 08:26:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59709 /var/tmp/spdk-nbd.sock 00:08:09.636 08:26:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59709 ']' 00:08:09.636 08:26:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:09.636 08:26:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:09.636 08:26:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:09.636 08:26:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.636 08:26:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:09.636 [2024-11-19 08:26:48.624980] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:09.636 [2024-11-19 08:26:48.625124] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59709 ] 00:08:09.636 [2024-11-19 08:26:48.802008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:09.636 [2024-11-19 08:26:48.911349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.636 [2024-11-19 08:26:48.911348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.637 08:26:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.637 08:26:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:10.637 08:26:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:10.918 Malloc0 00:08:10.918 08:26:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:11.176 Malloc1 00:08:11.176 08:26:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:11.176 08:26:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.176 08:26:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:11.176 08:26:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:11.176 08:26:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:11.176 08:26:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:11.176 08:26:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:11.176 08:26:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.176 08:26:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:11.176 08:26:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:11.176 08:26:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:11.176 08:26:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:11.176 08:26:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:11.176 08:26:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:11.176 08:26:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:11.176 08:26:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:11.434 /dev/nbd0 00:08:11.434 08:26:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:11.434 08:26:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:11.434 08:26:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:11.434 08:26:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:11.434 08:26:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:11.434 08:26:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:11.434 08:26:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:11.434 08:26:50 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:08:11.434 08:26:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:11.434 08:26:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:11.434 08:26:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:11.434 1+0 records in 00:08:11.434 1+0 records out 00:08:11.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031696 s, 12.9 MB/s 00:08:11.434 08:26:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:11.434 08:26:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:11.434 08:26:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:11.434 08:26:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:11.434 08:26:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:11.434 08:26:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:11.434 08:26:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:11.434 08:26:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:11.693 /dev/nbd1 00:08:11.952 08:26:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:11.952 08:26:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:11.952 08:26:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:11.952 08:26:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:11.952 08:26:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:11.952 08:26:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:11.952 08:26:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:11.952 08:26:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:11.952 08:26:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:11.952 08:26:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:11.952 08:26:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:11.952 1+0 records in 00:08:11.952 1+0 records out 00:08:11.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360036 s, 11.4 MB/s 00:08:11.952 08:26:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:11.952 08:26:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:11.952 08:26:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:11.952 08:26:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:11.952 08:26:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:11.952 08:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:11.952 08:26:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:11.952 08:26:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:11.952 08:26:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.952 
08:26:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:12.211 { 00:08:12.211 "nbd_device": "/dev/nbd0", 00:08:12.211 "bdev_name": "Malloc0" 00:08:12.211 }, 00:08:12.211 { 00:08:12.211 "nbd_device": "/dev/nbd1", 00:08:12.211 "bdev_name": "Malloc1" 00:08:12.211 } 00:08:12.211 ]' 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:12.211 { 00:08:12.211 "nbd_device": "/dev/nbd0", 00:08:12.211 "bdev_name": "Malloc0" 00:08:12.211 }, 00:08:12.211 { 00:08:12.211 "nbd_device": "/dev/nbd1", 00:08:12.211 "bdev_name": "Malloc1" 00:08:12.211 } 00:08:12.211 ]' 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:12.211 /dev/nbd1' 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:12.211 /dev/nbd1' 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:12.211 256+0 records in 00:08:12.211 256+0 records out 00:08:12.211 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00590189 s, 178 MB/s 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:12.211 08:26:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:12.211 256+0 records in 00:08:12.211 256+0 records out 00:08:12.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303558 s, 34.5 MB/s 00:08:12.212 08:26:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:12.212 08:26:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:12.212 256+0 records in 00:08:12.212 256+0 records out 00:08:12.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.034 s, 30.8 MB/s 00:08:12.212 08:26:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:12.212 08:26:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:12.212 08:26:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:12.212 08:26:51 event.app_repeat 
-- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:12.212 08:26:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:12.212 08:26:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:12.212 08:26:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:12.212 08:26:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.212 08:26:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:12.469 08:26:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.469 08:26:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:12.470 08:26:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:12.470 08:26:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:12.470 08:26:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.470 08:26:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:12.470 08:26:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:12.470 08:26:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:12.470 08:26:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.470 08:26:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:12.727 08:26:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:12.727 08:26:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:12.727 08:26:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:12.727 08:26:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.727 08:26:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.727 08:26:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:12.727 08:26:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:12.727 08:26:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.727 08:26:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.727 08:26:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:12.985 08:26:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:12.985 08:26:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:12.985 08:26:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:12.985 08:26:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.985 08:26:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.985 08:26:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:12.985 08:26:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:12.985 08:26:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.985 08:26:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:12.985 08:26:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.985 08:26:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:13.243 08:26:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:13.243 08:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:13.243 08:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:13.243 08:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:13.243 08:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:13.243 08:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:13.243 08:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:13.243 08:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:13.243 08:26:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:13.243 08:26:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:13.243 08:26:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:13.243 08:26:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:13.243 08:26:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:13.809 08:26:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:14.742 [2024-11-19 08:26:53.967931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:15.000 [2024-11-19 08:26:54.068519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.000 [2024-11-19 08:26:54.068530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.000 [2024-11-19 08:26:54.234172] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:15.000 [2024-11-19 08:26:54.234280] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:16.901 spdk_app_start Round 1 00:08:16.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:16.901 08:26:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:16.901 08:26:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:16.901 08:26:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59709 /var/tmp/spdk-nbd.sock 00:08:16.901 08:26:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59709 ']' 00:08:16.901 08:26:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:16.901 08:26:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.901 08:26:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
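Annotation: each app_repeat round blocks on waitforlisten before issuing RPCs; the trace shows max_retries=100 and the nbd socket path. A reduced sketch of that polling pattern — the real helper in common/autotest_common.sh also probes the target with an RPC call, while this simplified version only re-checks the pid and waits for the UNIX socket to appear:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk-nbd.sock} max_retries=100
        for ((i = 1; i <= max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
            [[ -S $rpc_addr ]] && return 0           # socket is up, RPCs can proceed
            sleep 0.1
        done
        return 1
    }
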
00:08:16.901 08:26:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.901 08:26:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:17.159 08:26:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.159 08:26:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:17.159 08:26:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:17.417 Malloc0 00:08:17.417 08:26:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:17.984 Malloc1 00:08:17.984 08:26:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:17.984 08:26:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.984 08:26:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:17.985 08:26:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:17.985 08:26:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:17.985 08:26:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:17.985 08:26:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:17.985 08:26:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.985 08:26:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:17.985 08:26:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:17.985 08:26:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:17.985 08:26:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:17.985 08:26:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:17.985 08:26:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:17.985 08:26:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:17.985 08:26:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:18.243 /dev/nbd0 00:08:18.243 08:26:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:18.243 08:26:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:18.243 08:26:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:18.243 08:26:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:18.243 08:26:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:18.243 08:26:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:18.243 08:26:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:18.243 08:26:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:18.243 08:26:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:18.243 08:26:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:18.243 08:26:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:18.243 1+0 records in 00:08:18.243 1+0 records out 
00:08:18.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282141 s, 14.5 MB/s 00:08:18.243 08:26:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.243 08:26:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:18.243 08:26:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.243 08:26:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:18.243 08:26:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:18.243 08:26:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:18.243 08:26:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:18.243 08:26:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:18.501 /dev/nbd1 00:08:18.501 08:26:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:18.501 08:26:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:18.501 08:26:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:18.501 08:26:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:18.501 08:26:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:18.501 08:26:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:18.501 08:26:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:18.501 08:26:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:18.501 08:26:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:18.501 08:26:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:18.501 08:26:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:18.501 1+0 records in 00:08:18.501 1+0 records out 00:08:18.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316353 s, 12.9 MB/s 00:08:18.501 08:26:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.501 08:26:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:18.501 08:26:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:18.501 08:26:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:18.501 08:26:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:18.501 08:26:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:18.501 08:26:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:18.501 08:26:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:18.501 08:26:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.501 08:26:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:19.067 { 00:08:19.067 "nbd_device": "/dev/nbd0", 00:08:19.067 "bdev_name": "Malloc0" 00:08:19.067 }, 00:08:19.067 { 00:08:19.067 "nbd_device": "/dev/nbd1", 00:08:19.067 "bdev_name": "Malloc1" 00:08:19.067 } 
00:08:19.067 ]' 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:19.067 { 00:08:19.067 "nbd_device": "/dev/nbd0", 00:08:19.067 "bdev_name": "Malloc0" 00:08:19.067 }, 00:08:19.067 { 00:08:19.067 "nbd_device": "/dev/nbd1", 00:08:19.067 "bdev_name": "Malloc1" 00:08:19.067 } 00:08:19.067 ]' 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:19.067 /dev/nbd1' 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:19.067 /dev/nbd1' 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:19.067 256+0 records in 00:08:19.067 256+0 records out 00:08:19.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00778571 s, 135 MB/s 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:19.067 256+0 records in 00:08:19.067 256+0 records out 00:08:19.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249693 s, 42.0 MB/s 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:19.067 256+0 records in 00:08:19.067 256+0 records out 00:08:19.067 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288397 s, 36.4 MB/s 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:19.067 08:26:58 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.067 08:26:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:19.068 08:26:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:19.068 08:26:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:19.068 08:26:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.068 08:26:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:19.326 08:26:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:19.326 08:26:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:19.326 08:26:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:19.326 08:26:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.326 08:26:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.326 08:26:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:19.326 08:26:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:19.326 08:26:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.326 08:26:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.326 08:26:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:19.584 08:26:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:19.585 08:26:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:19.585 08:26:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:19.585 08:26:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:19.585 08:26:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:19.585 08:26:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:19.585 08:26:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:19.585 08:26:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:19.585 08:26:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:19.585 08:26:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.585 08:26:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:19.843 08:26:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:19.843 08:26:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:19.843 08:26:59 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:08:20.099 08:26:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:20.099 08:26:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:20.099 08:26:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:20.099 08:26:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:20.099 08:26:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:20.099 08:26:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:20.099 08:26:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:20.099 08:26:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:20.099 08:26:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:20.099 08:26:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:20.665 08:26:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:21.628 [2024-11-19 08:27:00.663583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:21.628 [2024-11-19 08:27:00.766281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.628 [2024-11-19 08:27:00.766286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.907 [2024-11-19 08:27:00.933950] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:21.907 [2024-11-19 08:27:00.934062] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:23.823 spdk_app_start Round 2 00:08:23.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:23.823 08:27:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:23.823 08:27:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:23.823 08:27:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59709 /var/tmp/spdk-nbd.sock 00:08:23.823 08:27:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59709 ']' 00:08:23.823 08:27:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:23.823 08:27:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.823 08:27:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
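Annotation: rounds 0 through 2 each replay the same malloc-backed NBD data-verify cycle that fills the traces above. Condensed into plain commands — the bdev sizes, device names, and the 1 MiB compare window are exactly as logged; the temp-file path is shortened here for readability:

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096                       # 64 MB bdev, 4 KiB blocks -> Malloc0
    $rpc nbd_start_disk Malloc0 /dev/nbd0                 # expose the bdev as an NBD device
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256   # 1 MiB of random data
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0                    # read back through NBD and verify
    $rpc nbd_stop_disk /dev/nbd0
    rm nbdrandtest
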
00:08:23.823 08:27:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.823 08:27:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:23.823 08:27:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.823 08:27:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:23.823 08:27:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:24.082 Malloc0 00:08:24.341 08:27:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:24.601 Malloc1 00:08:24.601 08:27:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:24.601 08:27:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.601 08:27:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:24.601 08:27:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:24.601 08:27:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.601 08:27:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:24.601 08:27:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:24.601 08:27:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.601 08:27:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:24.601 08:27:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:24.601 08:27:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.601 08:27:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:24.601 08:27:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:24.601 08:27:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:24.601 08:27:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:24.601 08:27:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:24.860 /dev/nbd0 00:08:24.860 08:27:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:24.860 08:27:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:24.860 08:27:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:24.860 08:27:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:24.860 08:27:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:24.860 08:27:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:24.860 08:27:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:24.860 08:27:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:24.860 08:27:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:24.860 08:27:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:24.860 08:27:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:24.860 1+0 records in 00:08:24.860 1+0 records out 
00:08:24.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409299 s, 10.0 MB/s 00:08:24.860 08:27:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.860 08:27:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:24.860 08:27:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.860 08:27:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:24.860 08:27:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:24.861 08:27:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:24.861 08:27:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:24.861 08:27:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:25.120 /dev/nbd1 00:08:25.120 08:27:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:25.120 08:27:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:25.120 08:27:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:25.120 08:27:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:25.120 08:27:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:25.120 08:27:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:25.120 08:27:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:25.120 08:27:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:25.120 08:27:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:25.120 08:27:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:25.120 08:27:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:25.120 1+0 records in 00:08:25.120 1+0 records out 00:08:25.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358823 s, 11.4 MB/s 00:08:25.120 08:27:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.120 08:27:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:25.120 08:27:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.120 08:27:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:25.120 08:27:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:25.120 08:27:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:25.120 08:27:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:25.120 08:27:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:25.120 08:27:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.120 08:27:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:25.690 08:27:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:25.690 { 00:08:25.690 "nbd_device": "/dev/nbd0", 00:08:25.690 "bdev_name": "Malloc0" 00:08:25.690 }, 00:08:25.690 { 00:08:25.690 "nbd_device": "/dev/nbd1", 00:08:25.690 "bdev_name": "Malloc1" 00:08:25.690 } 
00:08:25.690 ]' 00:08:25.690 08:27:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:25.690 08:27:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:25.690 { 00:08:25.690 "nbd_device": "/dev/nbd0", 00:08:25.690 "bdev_name": "Malloc0" 00:08:25.690 }, 00:08:25.690 { 00:08:25.690 "nbd_device": "/dev/nbd1", 00:08:25.690 "bdev_name": "Malloc1" 00:08:25.690 } 00:08:25.690 ]' 00:08:25.690 08:27:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:25.690 /dev/nbd1' 00:08:25.690 08:27:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:25.691 /dev/nbd1' 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:25.691 256+0 records in 00:08:25.691 256+0 records out 00:08:25.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00830184 s, 126 MB/s 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:25.691 256+0 records in 00:08:25.691 256+0 records out 00:08:25.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0294482 s, 35.6 MB/s 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:25.691 256+0 records in 00:08:25.691 256+0 records out 00:08:25.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289851 s, 36.2 MB/s 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:25.691 08:27:04 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.691 08:27:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:25.949 08:27:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:25.949 08:27:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:25.949 08:27:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:25.949 08:27:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.949 08:27:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.949 08:27:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:25.949 08:27:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:25.949 08:27:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.949 08:27:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.949 08:27:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:26.517 08:27:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:26.517 08:27:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:26.517 08:27:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:26.517 08:27:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.517 08:27:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.517 08:27:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:26.517 08:27:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:26.517 08:27:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.517 08:27:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:26.517 08:27:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.517 08:27:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:26.775 08:27:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:26.775 08:27:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:26.775 08:27:05 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:26.775 08:27:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:26.775 08:27:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.775 08:27:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:26.775 08:27:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:26.775 08:27:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:26.775 08:27:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:26.775 08:27:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:26.775 08:27:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:26.775 08:27:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:26.775 08:27:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:27.033 08:27:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:28.408 [2024-11-19 08:27:07.300605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:28.408 [2024-11-19 08:27:07.402312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.408 [2024-11-19 08:27:07.402325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.408 [2024-11-19 08:27:07.570230] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:28.408 [2024-11-19 08:27:07.570381] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:30.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:30.307 08:27:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59709 /var/tmp/spdk-nbd.sock 00:08:30.307 08:27:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59709 ']' 00:08:30.307 08:27:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:30.307 08:27:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.307 08:27:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
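Annotation: between rounds the driver never signals the app from the shell; event.sh tears it down over RPC and then sleeps before the next iteration. Both commands appear verbatim in the trace (the reading of the sleep is mine — it lets the instance finish shutting down before the next round's waitforlisten starts polling):

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # target delivers SIGTERM to itself
    sleep 3
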
00:08:30.307 08:27:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.307 08:27:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:30.307 08:27:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.307 08:27:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:30.307 08:27:09 event.app_repeat -- event/event.sh@39 -- # killprocess 59709 00:08:30.307 08:27:09 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59709 ']' 00:08:30.307 08:27:09 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59709 00:08:30.307 08:27:09 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:08:30.307 08:27:09 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.307 08:27:09 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59709 00:08:30.566 killing process with pid 59709 00:08:30.566 08:27:09 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.566 08:27:09 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.566 08:27:09 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59709' 00:08:30.566 08:27:09 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59709 00:08:30.566 08:27:09 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59709 00:08:31.503 spdk_app_start is called in Round 0. 00:08:31.503 Shutdown signal received, stop current app iteration 00:08:31.503 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:08:31.503 spdk_app_start is called in Round 1. 00:08:31.503 Shutdown signal received, stop current app iteration 00:08:31.503 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:08:31.503 spdk_app_start is called in Round 2. 00:08:31.503 Shutdown signal received, stop current app iteration 00:08:31.503 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 reinitialization... 00:08:31.503 spdk_app_start is called in Round 3. 00:08:31.503 Shutdown signal received, stop current app iteration 00:08:31.503 ************************************ 00:08:31.503 END TEST app_repeat 00:08:31.503 ************************************ 00:08:31.503 08:27:10 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:31.503 08:27:10 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:31.503 00:08:31.503 real 0m21.962s 00:08:31.503 user 0m49.466s 00:08:31.503 sys 0m2.772s 00:08:31.503 08:27:10 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.503 08:27:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:31.503 08:27:10 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:31.503 08:27:10 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:31.503 08:27:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.503 08:27:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.503 08:27:10 event -- common/autotest_common.sh@10 -- # set +x 00:08:31.503 ************************************ 00:08:31.503 START TEST cpu_locks 00:08:31.503 ************************************ 00:08:31.503 08:27:10 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:31.503 * Looking for test storage... 
00:08:31.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:31.503 08:27:10 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:31.503 08:27:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:08:31.503 08:27:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:31.503 08:27:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:31.503 08:27:10 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.504 08:27:10 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:31.504 08:27:10 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:31.504 08:27:10 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.504 08:27:10 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:31.504 08:27:10 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.504 08:27:10 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.504 08:27:10 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.504 08:27:10 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:31.504 08:27:10 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.504 08:27:10 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:31.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.504 --rc genhtml_branch_coverage=1 00:08:31.504 --rc genhtml_function_coverage=1 00:08:31.504 --rc genhtml_legend=1 00:08:31.504 --rc geninfo_all_blocks=1 00:08:31.504 --rc geninfo_unexecuted_blocks=1 00:08:31.504 00:08:31.504 ' 00:08:31.504 08:27:10 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:31.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.504 --rc genhtml_branch_coverage=1 00:08:31.504 --rc genhtml_function_coverage=1 
00:08:31.504 --rc genhtml_legend=1 00:08:31.504 --rc geninfo_all_blocks=1 00:08:31.504 --rc geninfo_unexecuted_blocks=1 00:08:31.504 00:08:31.504 ' 00:08:31.504 08:27:10 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:31.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.504 --rc genhtml_branch_coverage=1 00:08:31.504 --rc genhtml_function_coverage=1 00:08:31.504 --rc genhtml_legend=1 00:08:31.504 --rc geninfo_all_blocks=1 00:08:31.504 --rc geninfo_unexecuted_blocks=1 00:08:31.504 00:08:31.504 ' 00:08:31.504 08:27:10 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:31.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.504 --rc genhtml_branch_coverage=1 00:08:31.504 --rc genhtml_function_coverage=1 00:08:31.504 --rc genhtml_legend=1 00:08:31.504 --rc geninfo_all_blocks=1 00:08:31.504 --rc geninfo_unexecuted_blocks=1 00:08:31.504 00:08:31.504 ' 00:08:31.504 08:27:10 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:31.504 08:27:10 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:31.504 08:27:10 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:31.504 08:27:10 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:31.504 08:27:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.504 08:27:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.504 08:27:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:31.504 ************************************ 00:08:31.504 START TEST default_locks 00:08:31.504 ************************************ 00:08:31.504 08:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:08:31.504 08:27:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60178 00:08:31.504 08:27:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60178 00:08:31.504 08:27:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:31.504 08:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60178 ']' 00:08:31.504 08:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.504 08:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.504 08:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.504 08:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.504 08:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:31.763 [2024-11-19 08:27:10.884398] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
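The xtrace above is scripts/common.sh deciding whether the installed lcov predates 2.0: 'lt 1.15 2' splits both version strings on '.', '-' and ':', walks the fields numerically, and returns on the first difference; because 1.15 < 2, the pre-2.0 branch/function-coverage rc flags are baked into LCOV_OPTS and LCOV. A minimal sketch of that comparison, assuming bash; the helper name below is illustrative, not the verbatim scripts/common.sh code:

    # Field-wise dotted-version less-than, as exercised by 'lt 1.15 2' above.
    version_lt() {
        local IFS=.-:                      # same separators the trace splits on
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}    # missing fields compare as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                           # equal is not less-than
    }

    version_lt 1.15 2 && echo "lcov < 2: enable the old branch-coverage rc flags"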
00:08:31.763 [2024-11-19 08:27:10.884779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60178 ] 00:08:32.023 [2024-11-19 08:27:11.066179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.023 [2024-11-19 08:27:11.167418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.959 08:27:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.959 08:27:11 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:08:32.959 08:27:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60178 00:08:32.959 08:27:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:32.959 08:27:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60178 00:08:33.218 08:27:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60178 00:08:33.218 08:27:12 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60178 ']' 00:08:33.218 08:27:12 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60178 00:08:33.218 08:27:12 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:08:33.218 08:27:12 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.218 08:27:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60178 00:08:33.218 killing process with pid 60178 00:08:33.218 08:27:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.218 08:27:12 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.218 08:27:12 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60178' 00:08:33.218 08:27:12 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60178 00:08:33.218 08:27:12 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60178 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60178 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60178 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60178 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60178 ']' 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
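Here cpu_locks.sh confirms that the freshly started target (pid 60178, core mask 0x1) really holds its core lock: locks_exist pipes the lslocks output for that pid through a grep for spdk_cpu_lock, SPDK's per-core lock files under /var/tmp. A minimal sketch of the check seen in the trace, assuming util-linux lslocks:

    # Does the given pid hold an advisory lock on one of SPDK's per-core
    # lock files (/var/tmp/spdk_cpu_lock_<core>, e.g. ..._000 for core 0)?
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 60178 && echo "pid 60178 holds its CPU core lock"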
00:08:35.753 ERROR: process (pid: 60178) is no longer running 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:35.753 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60178) - No such process 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:35.753 00:08:35.753 real 0m3.732s 00:08:35.753 user 0m3.869s 00:08:35.753 sys 0m0.603s 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.753 08:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:35.753 ************************************ 00:08:35.753 END TEST default_locks 00:08:35.753 ************************************ 00:08:35.753 08:27:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:35.753 08:27:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.753 08:27:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.753 08:27:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:35.753 ************************************ 00:08:35.753 START TEST default_locks_via_rpc 00:08:35.753 ************************************ 00:08:35.753 08:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:08:35.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
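The assertion above leans on autotest_common.sh's NOT wrapper: once pid 60178 is killed, waitforlisten against the stale pid must fail, and NOT inverts that failure into a pass (the trace records es=1 and checks it). The real helper also inspects the exit-code range; a minimal sketch of the shape, assuming bash:

    # Pass only when the wrapped command fails (here: the killed target is gone).
    NOT() {
        if "$@"; then
            return 1       # unexpectedly succeeded -> assertion fails
        fi
        return 0           # failed as expected -> assertion passes
    }

    NOT kill -0 60178 && echo "pid 60178 is no longer running, as expected"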
00:08:35.753 08:27:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60253 00:08:35.753 08:27:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:35.753 08:27:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60253 00:08:35.753 08:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60253 ']' 00:08:35.753 08:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.753 08:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.753 08:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.753 08:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.753 08:27:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.753 [2024-11-19 08:27:14.686977] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:35.753 [2024-11-19 08:27:14.687414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60253 ] 00:08:35.753 [2024-11-19 08:27:14.876788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.753 [2024-11-19 08:27:15.003954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.690 08:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.690 08:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:36.690 08:27:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:36.690 08:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.690 08:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.690 08:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.690 08:27:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:36.690 08:27:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:36.690 08:27:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:36.690 08:27:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:36.690 08:27:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:36.690 08:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.690 08:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.690 08:27:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.690 08:27:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60253 00:08:36.690 08:27:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60253 00:08:36.690 
08:27:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:37.257 08:27:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60253 00:08:37.257 08:27:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60253 ']' 00:08:37.257 08:27:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60253 00:08:37.257 08:27:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:08:37.257 08:27:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.257 08:27:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60253 00:08:37.257 killing process with pid 60253 00:08:37.257 08:27:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:37.257 08:27:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:37.258 08:27:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60253' 00:08:37.258 08:27:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60253 00:08:37.258 08:27:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60253 00:08:39.234 ************************************ 00:08:39.234 END TEST default_locks_via_rpc 00:08:39.234 ************************************ 00:08:39.234 00:08:39.234 real 0m3.834s 00:08:39.234 user 0m3.927s 00:08:39.234 sys 0m0.649s 00:08:39.234 08:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.234 08:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.234 08:27:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:39.234 08:27:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.234 08:27:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.234 08:27:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:39.234 ************************************ 00:08:39.234 START TEST non_locking_app_on_locked_coremask 00:08:39.234 ************************************ 00:08:39.234 08:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:08:39.234 08:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60327 00:08:39.234 08:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60327 /var/tmp/spdk.sock 00:08:39.234 08:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60327 ']' 00:08:39.234 08:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:39.234 08:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.234 08:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
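The default_locks_via_rpc run above toggles the same locking at runtime instead of at startup: framework_disable_cpumask_locks releases the per-core lock files over the RPC socket (no_locks then finds none held), and framework_enable_cpumask_locks re-claims them. A hedged equivalent of those rpc_cmd calls using scripts/rpc.py directly, with the socket path this job uses:

    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    # at this point lslocks finds no spdk_cpu_lock_* held by the target
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # the 'grep -q spdk_cpu_lock' probe from the trace succeeds again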
00:08:39.234 08:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.234 08:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.234 08:27:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:39.492 [2024-11-19 08:27:18.549159] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:39.492 [2024-11-19 08:27:18.549347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60327 ] 00:08:39.492 [2024-11-19 08:27:18.732898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.751 [2024-11-19 08:27:18.844118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.692 08:27:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.692 08:27:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:40.692 08:27:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:40.692 08:27:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60343 00:08:40.692 08:27:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60343 /var/tmp/spdk2.sock 00:08:40.692 08:27:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60343 ']' 00:08:40.692 08:27:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:40.693 08:27:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.693 08:27:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:40.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:40.693 08:27:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.693 08:27:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:40.693 [2024-11-19 08:27:19.747036] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:40.693 [2024-11-19 08:27:19.747435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60343 ] 00:08:40.693 [2024-11-19 08:27:19.946246] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:40.693 [2024-11-19 08:27:19.946372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.951 [2024-11-19 08:27:20.169447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.856 08:27:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.856 08:27:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:42.856 08:27:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60327 00:08:42.856 08:27:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:42.856 08:27:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60327 00:08:43.422 08:27:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60327 00:08:43.422 08:27:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60327 ']' 00:08:43.423 08:27:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60327 00:08:43.423 08:27:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:43.423 08:27:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.423 08:27:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60327 00:08:43.423 killing process with pid 60327 00:08:43.423 08:27:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.423 08:27:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.423 08:27:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60327' 00:08:43.423 08:27:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60327 00:08:43.423 08:27:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60327 00:08:47.613 08:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60343 00:08:47.613 08:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60343 ']' 00:08:47.613 08:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60343 00:08:47.613 08:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:47.613 08:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.613 08:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60343 00:08:47.613 killing process with pid 60343 00:08:47.613 08:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.613 08:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.613 08:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60343' 00:08:47.613 08:27:26 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60343 00:08:47.613 08:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60343 00:08:50.147 ************************************ 00:08:50.147 END TEST non_locking_app_on_locked_coremask 00:08:50.147 ************************************ 00:08:50.147 00:08:50.147 real 0m10.460s 00:08:50.147 user 0m11.159s 00:08:50.147 sys 0m1.256s 00:08:50.147 08:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.147 08:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:50.147 08:27:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:50.147 08:27:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:50.147 08:27:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.147 08:27:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:50.147 ************************************ 00:08:50.147 START TEST locking_app_on_unlocked_coremask 00:08:50.147 ************************************ 00:08:50.147 08:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:08:50.147 08:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60478 00:08:50.147 08:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:50.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.147 08:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60478 /var/tmp/spdk.sock 00:08:50.147 08:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60478 ']' 00:08:50.147 08:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.147 08:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.147 08:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.147 08:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.147 08:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:50.147 [2024-11-19 08:27:29.055880] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:08:50.147 [2024-11-19 08:27:29.056115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60478 ] 00:08:50.147 [2024-11-19 08:27:29.232505] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
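Net effect of the non_locking_app_on_locked_coremask run that just ended, and of the "CPU core locks deactivated" notice appearing again here: a target started with --disable-cpumask-locks never touches /var/tmp/spdk_cpu_lock_*, so it can share a core mask with a locking target. A minimal sketch of that pairing (startup waits elided):

    build/bin/spdk_tgt -m 0x1 &                          # claims core 0
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                         # opts out, so no conflict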
00:08:50.147 [2024-11-19 08:27:29.232559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.147 [2024-11-19 08:27:29.335536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:51.083 08:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.083 08:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:51.083 08:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60494 00:08:51.083 08:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:51.083 08:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60494 /var/tmp/spdk2.sock 00:08:51.083 08:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60494 ']' 00:08:51.083 08:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:51.084 08:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.084 08:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:51.084 08:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.084 08:27:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:51.084 [2024-11-19 08:27:30.223440] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:08:51.084 [2024-11-19 08:27:30.223633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60494 ] 00:08:51.342 [2024-11-19 08:27:30.424747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.342 [2024-11-19 08:27:30.631022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.243 08:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.243 08:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:53.243 08:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60494 00:08:53.243 08:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:53.243 08:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60494 00:08:53.810 08:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60478 00:08:53.810 08:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60478 ']' 00:08:53.810 08:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60478 00:08:53.810 08:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:53.810 08:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.810 08:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60478 00:08:53.810 08:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:53.810 killing process with pid 60478 00:08:53.810 08:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:53.810 08:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60478' 00:08:53.810 08:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60478 00:08:53.810 08:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60478 00:08:58.032 08:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60494 00:08:58.032 08:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60494 ']' 00:08:58.032 08:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60494 00:08:58.032 08:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:58.032 08:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.032 08:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60494 00:08:58.032 08:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.032 08:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.032 killing process with pid 60494 00:08:58.032 08:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60494' 00:08:58.032 08:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60494 00:08:58.032 08:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60494 00:08:59.936 00:08:59.936 real 0m10.245s 00:08:59.936 user 0m10.913s 00:08:59.936 sys 0m1.145s 00:08:59.936 08:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.936 ************************************ 00:08:59.936 END TEST locking_app_on_unlocked_coremask 00:08:59.936 08:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:59.936 ************************************ 00:08:59.936 08:27:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:59.936 08:27:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.936 08:27:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.936 08:27:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:59.936 ************************************ 00:08:59.936 START TEST locking_app_on_locked_coremask 00:08:59.936 ************************************ 00:08:59.936 08:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:08:59.936 08:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60630 00:08:59.936 08:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60630 /var/tmp/spdk.sock 00:08:59.936 08:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60630 ']' 00:08:59.936 08:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:59.936 08:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.936 08:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.936 08:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.936 08:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.936 08:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:00.193 [2024-11-19 08:27:39.330760] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
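locking_app_on_unlocked_coremask, which just finished above, is the mirror image: when the first target opts out, the core stays unclaimed, so a second plain target on the same mask takes the lock itself, and locks_exist finds it held by the second pid (60494 in the trace). A minimal sketch (startup waits elided):

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # holds no core lock
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims core 0 itself
    lslocks -p "$!" | grep -q spdk_cpu_lock && echo "second target owns core 0"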
00:09:00.193 [2024-11-19 08:27:39.330916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60630 ] 00:09:00.452 [2024-11-19 08:27:39.504063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.452 [2024-11-19 08:27:39.606627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60646 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60646 /var/tmp/spdk2.sock 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60646 /var/tmp/spdk2.sock 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60646 /var/tmp/spdk2.sock 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60646 ']' 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.388 08:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:01.388 [2024-11-19 08:27:40.523991] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:01.388 [2024-11-19 08:27:40.524208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60646 ] 00:09:01.647 [2024-11-19 08:27:40.742184] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60630 has claimed it. 00:09:01.647 [2024-11-19 08:27:40.742285] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:02.214 ERROR: process (pid: 60646) is no longer running 00:09:02.214 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60646) - No such process 00:09:02.214 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.214 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:02.214 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:02.214 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:02.214 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:02.214 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:02.214 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60630 00:09:02.214 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:02.214 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60630 00:09:02.473 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60630 00:09:02.473 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60630 ']' 00:09:02.473 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60630 00:09:02.473 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:02.473 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.473 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60630 00:09:02.473 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:02.473 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:02.473 killing process with pid 60630 00:09:02.473 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60630' 00:09:02.473 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60630 00:09:02.473 08:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60630 00:09:05.008 00:09:05.008 real 0m4.467s 00:09:05.008 user 0m5.027s 00:09:05.008 sys 0m0.725s 00:09:05.008 08:27:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.008 08:27:43 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:09:05.008 ************************************ 00:09:05.008 END TEST locking_app_on_locked_coremask 00:09:05.008 ************************************ 00:09:05.008 08:27:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:05.008 08:27:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.008 08:27:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.008 08:27:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:05.008 ************************************ 00:09:05.008 START TEST locking_overlapped_coremask 00:09:05.008 ************************************ 00:09:05.008 08:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:05.008 08:27:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60716 00:09:05.008 08:27:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60716 /var/tmp/spdk.sock 00:09:05.008 08:27:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:05.008 08:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60716 ']' 00:09:05.008 08:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.008 08:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.008 08:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.008 08:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.008 08:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:05.008 [2024-11-19 08:27:43.859778] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
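The locking_app_on_locked_coremask failure above is the claim mechanism working as intended: claim_cpu_cores cannot take the advisory lock on core 0's file while pid 60630 holds it, so the second target aborts with "Unable to acquire lock on assigned core mask". The lock files are ordinary paths under /var/tmp, so the holder can be inspected directly:

    ls /var/tmp/spdk_cpu_lock_*        # one file per claimed core, e.g. ..._000
    lslocks | grep spdk_cpu_lock       # maps each lock file to the owning pid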
00:09:05.008 [2024-11-19 08:27:43.859952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60716 ] 00:09:05.008 [2024-11-19 08:27:44.095860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:05.008 [2024-11-19 08:27:44.222443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.008 [2024-11-19 08:27:44.222511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.008 [2024-11-19 08:27:44.223167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60734 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60734 /var/tmp/spdk2.sock 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60734 /var/tmp/spdk2.sock 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60734 /var/tmp/spdk2.sock 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60734 ']' 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:05.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.942 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:05.942 [2024-11-19 08:27:45.123152] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:05.942 [2024-11-19 08:27:45.123296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60734 ] 00:09:06.200 [2024-11-19 08:27:45.319954] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60716 has claimed it. 00:09:06.200 [2024-11-19 08:27:45.323916] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:06.807 ERROR: process (pid: 60734) is no longer running 00:09:06.807 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60734) - No such process 00:09:06.807 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.807 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:06.807 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:06.807 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:06.807 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:06.807 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:06.807 08:27:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:06.807 08:27:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:06.807 08:27:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:06.807 08:27:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:06.807 08:27:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60716 00:09:06.807 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60716 ']' 00:09:06.807 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60716 00:09:06.808 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:06.808 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.808 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60716 00:09:06.808 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.808 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.808 killing process with pid 60716 00:09:06.808 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60716' 00:09:06.808 08:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60716 00:09:06.808 08:27:45 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60716 00:09:08.737 00:09:08.737 real 0m4.290s 00:09:08.737 user 0m11.816s 00:09:08.737 sys 0m0.578s 00:09:08.737 08:27:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.737 ************************************ 00:09:08.737 END TEST locking_overlapped_coremask 00:09:08.737 08:27:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:08.737 ************************************ 00:09:08.996 08:27:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:08.996 08:27:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.996 08:27:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.996 08:27:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:08.996 ************************************ 00:09:08.996 START TEST locking_overlapped_coremask_via_rpc 00:09:08.996 ************************************ 00:09:08.996 08:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:08.996 08:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60798 00:09:08.996 08:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60798 /var/tmp/spdk.sock 00:09:08.996 08:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60798 ']' 00:09:08.996 08:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.996 08:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:08.996 08:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.996 08:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.996 08:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.996 08:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.996 [2024-11-19 08:27:48.225235] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:08.996 [2024-11-19 08:27:48.225936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60798 ] 00:09:09.254 [2024-11-19 08:27:48.406897] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
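The locking_overlapped_coremask run above pits mask 0x7 (cores 0-2) against 0x1c (cores 2-4); the single shared bit is exactly the core named in the "Cannot create lock on core 2" error. The arithmetic, plus the post-mortem check the trace performs:

    printf 'shared mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> shared mask: 0x4 (core 2)
    # check_remaining_locks then expects the survivor's locks to be exactly
    # /var/tmp/spdk_cpu_lock_{000..002}, matching its 0x7 mask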
00:09:09.254 [2024-11-19 08:27:48.406958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:09.254 [2024-11-19 08:27:48.512982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.254 [2024-11-19 08:27:48.513122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.254 [2024-11-19 08:27:48.513134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.189 08:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.189 08:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:10.189 08:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60816 00:09:10.189 08:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60816 /var/tmp/spdk2.sock 00:09:10.189 08:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:10.189 08:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60816 ']' 00:09:10.189 08:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:10.189 08:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:10.189 08:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:10.189 08:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.189 08:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.189 [2024-11-19 08:27:49.466652] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:10.189 [2024-11-19 08:27:49.467205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60816 ] 00:09:10.447 [2024-11-19 08:27:49.664795] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:10.447 [2024-11-19 08:27:49.664861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:10.707 [2024-11-19 08:27:49.880885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.707 [2024-11-19 08:27:49.881021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.707 [2024-11-19 08:27:49.881037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.610 [2024-11-19 08:27:51.451786] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60798 has claimed it. 
00:09:12.610 request: 00:09:12.610 { 00:09:12.610 "method": "framework_enable_cpumask_locks", 00:09:12.610 "req_id": 1 00:09:12.610 } 00:09:12.610 Got JSON-RPC error response 00:09:12.610 response: 00:09:12.610 { 00:09:12.610 "code": -32603, 00:09:12.610 "message": "Failed to claim CPU core: 2" 00:09:12.610 } 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60798 /var/tmp/spdk.sock 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60798 ']' 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60816 /var/tmp/spdk2.sock 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60816 ']' 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:12.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
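The -32603 failure above is the expected outcome: once locks are enabled on the first instance, each of its cores is backed by a lock file (/var/tmp/spdk_cpu_lock_000 through _002, as the check_remaining_locks step at the end of this test confirms), so the second instance cannot claim core 2. The same error can be reproduced by hand against the second socket; a sketch using the paths from the trace:

    # Re-issue the failing call manually (sketch; same socket as above).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks
    # -> error -32603 'Failed to claim CPU core: 2'; pid 60798 holds that lock.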
00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.610 08:27:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.869 ************************************ 00:09:12.869 END TEST locking_overlapped_coremask_via_rpc 00:09:12.869 ************************************ 00:09:12.869 08:27:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.869 08:27:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:12.869 08:27:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:12.869 08:27:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:12.869 08:27:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:12.869 08:27:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:12.869 00:09:12.869 real 0m4.072s 00:09:12.869 user 0m1.749s 00:09:12.869 sys 0m0.188s 00:09:12.869 08:27:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.869 08:27:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.127 08:27:52 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:13.127 08:27:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60798 ]] 00:09:13.127 08:27:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60798 00:09:13.127 08:27:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60798 ']' 00:09:13.127 08:27:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60798 00:09:13.127 08:27:52 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:13.127 08:27:52 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.127 08:27:52 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60798 00:09:13.127 killing process with pid 60798 00:09:13.127 08:27:52 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.127 08:27:52 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.127 08:27:52 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60798' 00:09:13.127 08:27:52 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60798 00:09:13.127 08:27:52 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60798 00:09:15.054 08:27:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60816 ]] 00:09:15.055 08:27:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60816 00:09:15.055 08:27:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60816 ']' 00:09:15.055 08:27:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60816 00:09:15.055 08:27:54 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:15.055 08:27:54 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.055 
08:27:54 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60816 00:09:15.055 killing process with pid 60816 00:09:15.055 08:27:54 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:15.055 08:27:54 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:15.055 08:27:54 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60816' 00:09:15.055 08:27:54 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60816 00:09:15.055 08:27:54 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60816 00:09:17.588 08:27:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:17.588 08:27:56 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:17.588 08:27:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60798 ]] 00:09:17.588 08:27:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60798 00:09:17.588 08:27:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60798 ']' 00:09:17.588 08:27:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60798 00:09:17.588 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60798) - No such process 00:09:17.588 Process with pid 60798 is not found 00:09:17.588 08:27:56 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60798 is not found' 00:09:17.588 08:27:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60816 ]] 00:09:17.588 08:27:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60816 00:09:17.588 08:27:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60816 ']' 00:09:17.588 08:27:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60816 00:09:17.588 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60816) - No such process 00:09:17.588 Process with pid 60816 is not found 00:09:17.588 08:27:56 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60816 is not found' 00:09:17.588 08:27:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:17.588 00:09:17.588 real 0m45.830s 00:09:17.588 user 1m20.062s 00:09:17.588 sys 0m6.075s 00:09:17.588 08:27:56 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.588 08:27:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:17.588 ************************************ 00:09:17.588 END TEST cpu_locks 00:09:17.588 ************************************ 00:09:17.588 00:09:17.588 real 1m17.318s 00:09:17.588 user 2m24.166s 00:09:17.588 sys 0m9.869s 00:09:17.588 08:27:56 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.588 08:27:56 event -- common/autotest_common.sh@10 -- # set +x 00:09:17.588 ************************************ 00:09:17.588 END TEST event 00:09:17.588 ************************************ 00:09:17.589 08:27:56 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:17.589 08:27:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.589 08:27:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.589 08:27:56 -- common/autotest_common.sh@10 -- # set +x 00:09:17.589 ************************************ 00:09:17.589 START TEST thread 00:09:17.589 ************************************ 00:09:17.589 08:27:56 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:17.589 * Looking for test storage... 
00:09:17.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:17.589 08:27:56 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:17.589 08:27:56 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:17.589 08:27:56 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:09:17.589 08:27:56 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:17.589 08:27:56 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.589 08:27:56 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.589 08:27:56 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.589 08:27:56 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.589 08:27:56 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.589 08:27:56 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.589 08:27:56 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.589 08:27:56 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.589 08:27:56 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.589 08:27:56 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.589 08:27:56 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.589 08:27:56 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:17.589 08:27:56 thread -- scripts/common.sh@345 -- # : 1 00:09:17.589 08:27:56 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.589 08:27:56 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:17.589 08:27:56 thread -- scripts/common.sh@365 -- # decimal 1 00:09:17.589 08:27:56 thread -- scripts/common.sh@353 -- # local d=1 00:09:17.589 08:27:56 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.589 08:27:56 thread -- scripts/common.sh@355 -- # echo 1 00:09:17.589 08:27:56 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.589 08:27:56 thread -- scripts/common.sh@366 -- # decimal 2 00:09:17.589 08:27:56 thread -- scripts/common.sh@353 -- # local d=2 00:09:17.589 08:27:56 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.589 08:27:56 thread -- scripts/common.sh@355 -- # echo 2 00:09:17.589 08:27:56 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.589 08:27:56 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.589 08:27:56 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.589 08:27:56 thread -- scripts/common.sh@368 -- # return 0 00:09:17.589 08:27:56 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.589 08:27:56 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:17.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.589 --rc genhtml_branch_coverage=1 00:09:17.589 --rc genhtml_function_coverage=1 00:09:17.589 --rc genhtml_legend=1 00:09:17.589 --rc geninfo_all_blocks=1 00:09:17.589 --rc geninfo_unexecuted_blocks=1 00:09:17.589 00:09:17.589 ' 00:09:17.589 08:27:56 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:17.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.589 --rc genhtml_branch_coverage=1 00:09:17.589 --rc genhtml_function_coverage=1 00:09:17.589 --rc genhtml_legend=1 00:09:17.589 --rc geninfo_all_blocks=1 00:09:17.589 --rc geninfo_unexecuted_blocks=1 00:09:17.589 00:09:17.589 ' 00:09:17.589 08:27:56 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:17.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:17.589 --rc genhtml_branch_coverage=1 00:09:17.589 --rc genhtml_function_coverage=1 00:09:17.589 --rc genhtml_legend=1 00:09:17.589 --rc geninfo_all_blocks=1 00:09:17.589 --rc geninfo_unexecuted_blocks=1 00:09:17.589 00:09:17.589 ' 00:09:17.589 08:27:56 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:17.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.589 --rc genhtml_branch_coverage=1 00:09:17.589 --rc genhtml_function_coverage=1 00:09:17.589 --rc genhtml_legend=1 00:09:17.589 --rc geninfo_all_blocks=1 00:09:17.589 --rc geninfo_unexecuted_blocks=1 00:09:17.589 00:09:17.589 ' 00:09:17.589 08:27:56 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:17.589 08:27:56 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:17.589 08:27:56 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.589 08:27:56 thread -- common/autotest_common.sh@10 -- # set +x 00:09:17.589 ************************************ 00:09:17.589 START TEST thread_poller_perf 00:09:17.589 ************************************ 00:09:17.589 08:27:56 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:17.589 [2024-11-19 08:27:56.727933] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:17.589 [2024-11-19 08:27:56.728864] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60998 ] 00:09:17.847 [2024-11-19 08:27:56.895564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.847 [2024-11-19 08:27:56.999966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.847 Running 1000 pollers for 1 seconds with 1 microseconds period. 
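As the banner suggests, the flags map directly onto the run: -b 1000 registers a thousand pollers, -l 1 gives each a 1-microsecond period (the second run below uses -l 0 for period-less pollers), and -t 1 spins the reactor for one second. The benchmark can be invoked standalone with the same arguments:

    # Sketch; identical to the poller_perf invocation in the trace above.
    /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1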
00:09:19.281 [2024-11-19T08:27:58.577Z] ====================================== 00:09:19.281 [2024-11-19T08:27:58.577Z] busy:2211354676 (cyc) 00:09:19.281 [2024-11-19T08:27:58.577Z] total_run_count: 288000 00:09:19.281 [2024-11-19T08:27:58.577Z] tsc_hz: 2200000000 (cyc) 00:09:19.281 [2024-11-19T08:27:58.577Z] ====================================== 00:09:19.281 [2024-11-19T08:27:58.577Z] poller_cost: 7678 (cyc), 3490 (nsec) 00:09:19.281 00:09:19.281 real 0m1.551s 00:09:19.281 ************************************ 00:09:19.281 END TEST thread_poller_perf 00:09:19.281 ************************************ 00:09:19.281 user 0m1.354s 00:09:19.281 sys 0m0.084s 00:09:19.281 08:27:58 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.281 08:27:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:19.281 08:27:58 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:19.281 08:27:58 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:19.281 08:27:58 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.281 08:27:58 thread -- common/autotest_common.sh@10 -- # set +x 00:09:19.281 ************************************ 00:09:19.281 START TEST thread_poller_perf 00:09:19.281 ************************************ 00:09:19.281 08:27:58 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:19.281 [2024-11-19 08:27:58.338723] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:19.281 [2024-11-19 08:27:58.338886] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61040 ] 00:09:19.281 [2024-11-19 08:27:58.525390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.539 [2024-11-19 08:27:58.649520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.539 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:09:20.917 [2024-11-19T08:28:00.213Z] ====================================== 00:09:20.917 [2024-11-19T08:28:00.213Z] busy:2204394817 (cyc) 00:09:20.917 [2024-11-19T08:28:00.213Z] total_run_count: 3560000 00:09:20.917 [2024-11-19T08:28:00.213Z] tsc_hz: 2200000000 (cyc) 00:09:20.917 [2024-11-19T08:28:00.213Z] ====================================== 00:09:20.917 [2024-11-19T08:28:00.213Z] poller_cost: 619 (cyc), 281 (nsec) 00:09:20.917 00:09:20.917 real 0m1.584s 00:09:20.917 user 0m1.374s 00:09:20.917 sys 0m0.099s 00:09:20.917 08:27:59 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.917 08:27:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:20.917 ************************************ 00:09:20.917 END TEST thread_poller_perf 00:09:20.917 ************************************ 00:09:20.917 08:27:59 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:20.917 00:09:20.917 real 0m3.419s 00:09:20.917 user 0m2.876s 00:09:20.917 sys 0m0.317s 00:09:20.917 08:27:59 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.917 08:27:59 thread -- common/autotest_common.sh@10 -- # set +x 00:09:20.917 ************************************ 00:09:20.917 END TEST thread 00:09:20.917 ************************************ 00:09:20.917 08:27:59 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:20.917 08:27:59 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:20.917 08:27:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.917 08:27:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.917 08:27:59 -- common/autotest_common.sh@10 -- # set +x 00:09:20.917 ************************************ 00:09:20.917 START TEST app_cmdline 00:09:20.917 ************************************ 00:09:20.917 08:27:59 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:20.917 * Looking for test storage... 
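The poller_cost figures in the two reports above are simply busy TSC cycles divided by run count, converted to nanoseconds via the reported tsc_hz; redoing the arithmetic:

    # Verify poller_cost from the reported counters (sketch, arithmetic only).
    echo $(( 2211354676 / 288000 ))             # run 1: ~7678 cycles per poll
    echo $(( 7678 * 1000000000 / 2200000000 ))  # ~3490 ns at tsc_hz 2.2 GHz
    echo $(( 2204394817 / 3560000 ))            # run 2: ~619 cycles per poll
    echo $(( 619 * 1000000000 / 2200000000 ))   # ~281 ns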
00:09:20.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:20.917 08:28:00 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:20.917 08:28:00 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:09:20.917 08:28:00 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:20.917 08:28:00 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.917 08:28:00 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:20.917 08:28:00 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.917 08:28:00 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:20.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.917 --rc genhtml_branch_coverage=1 00:09:20.917 --rc genhtml_function_coverage=1 00:09:20.917 --rc genhtml_legend=1 00:09:20.917 --rc geninfo_all_blocks=1 00:09:20.917 --rc geninfo_unexecuted_blocks=1 00:09:20.917 00:09:20.917 ' 00:09:20.917 08:28:00 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:20.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.917 --rc genhtml_branch_coverage=1 00:09:20.917 --rc genhtml_function_coverage=1 00:09:20.917 --rc genhtml_legend=1 00:09:20.917 --rc geninfo_all_blocks=1 00:09:20.917 --rc geninfo_unexecuted_blocks=1 00:09:20.917 
00:09:20.917 ' 00:09:20.917 08:28:00 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:20.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.917 --rc genhtml_branch_coverage=1 00:09:20.917 --rc genhtml_function_coverage=1 00:09:20.917 --rc genhtml_legend=1 00:09:20.917 --rc geninfo_all_blocks=1 00:09:20.917 --rc geninfo_unexecuted_blocks=1 00:09:20.917 00:09:20.917 ' 00:09:20.917 08:28:00 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:20.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.917 --rc genhtml_branch_coverage=1 00:09:20.917 --rc genhtml_function_coverage=1 00:09:20.917 --rc genhtml_legend=1 00:09:20.917 --rc geninfo_all_blocks=1 00:09:20.917 --rc geninfo_unexecuted_blocks=1 00:09:20.917 00:09:20.917 ' 00:09:20.917 08:28:00 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:20.917 08:28:00 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61129 00:09:20.917 08:28:00 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:20.917 08:28:00 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61129 00:09:20.917 08:28:00 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61129 ']' 00:09:20.917 08:28:00 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.918 08:28:00 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.918 08:28:00 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.918 08:28:00 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.918 08:28:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:21.176 [2024-11-19 08:28:00.280573] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:21.176 [2024-11-19 08:28:00.280977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61129 ] 00:09:21.435 [2024-11-19 08:28:00.470733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.435 [2024-11-19 08:28:00.573768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.370 08:28:01 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.370 08:28:01 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:22.370 08:28:01 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:22.370 { 00:09:22.370 "version": "SPDK v25.01-pre git sha1 d47eb51c9", 00:09:22.370 "fields": { 00:09:22.370 "major": 25, 00:09:22.370 "minor": 1, 00:09:22.370 "patch": 0, 00:09:22.370 "suffix": "-pre", 00:09:22.370 "commit": "d47eb51c9" 00:09:22.370 } 00:09:22.370 } 00:09:22.370 08:28:01 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:22.370 08:28:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:22.370 08:28:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:22.370 08:28:01 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:22.370 08:28:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:22.370 08:28:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:22.370 08:28:01 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:22.370 08:28:01 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.370 08:28:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:22.370 08:28:01 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.370 08:28:01 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:22.370 08:28:01 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:22.370 08:28:01 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:22.370 08:28:01 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:22.370 08:28:01 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:22.370 08:28:01 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:22.370 08:28:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:22.370 08:28:01 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:22.370 08:28:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:22.370 08:28:01 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:22.370 08:28:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:22.370 08:28:01 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:22.370 08:28:01 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:22.370 08:28:01 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:22.936 request: 00:09:22.936 { 00:09:22.936 "method": "env_dpdk_get_mem_stats", 00:09:22.936 "req_id": 1 00:09:22.936 } 00:09:22.936 Got JSON-RPC error response 00:09:22.936 response: 00:09:22.936 { 00:09:22.936 "code": -32601, 00:09:22.936 "message": "Method not found" 00:09:22.936 } 00:09:22.936 08:28:01 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:22.936 08:28:01 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:22.936 08:28:01 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:22.936 08:28:01 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:22.936 08:28:01 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61129 00:09:22.936 08:28:01 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61129 ']' 00:09:22.936 08:28:01 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61129 00:09:22.936 08:28:01 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:22.936 08:28:01 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.936 08:28:01 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61129 00:09:22.936 killing process with pid 61129 00:09:22.936 08:28:01 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.936 08:28:01 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.936 08:28:01 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61129' 00:09:22.936 08:28:01 app_cmdline -- common/autotest_common.sh@973 -- # kill 61129 00:09:22.937 08:28:01 app_cmdline -- common/autotest_common.sh@978 -- # wait 61129 00:09:24.839 ************************************ 00:09:24.839 END TEST app_cmdline 00:09:24.839 ************************************ 00:09:24.839 00:09:24.839 real 0m4.064s 00:09:24.839 user 0m4.641s 00:09:24.839 sys 0m0.528s 00:09:24.839 08:28:04 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.839 08:28:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:24.839 08:28:04 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:24.839 08:28:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.839 08:28:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.839 08:28:04 -- common/autotest_common.sh@10 -- # set +x 00:09:24.839 ************************************ 00:09:24.839 START TEST version 00:09:24.839 ************************************ 00:09:24.839 08:28:04 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:25.098 * Looking for test storage... 
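The exchange above is the RPC allow-list in action: this target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer normally (the version JSON earlier in the test) while anything outside the list, here env_dpdk_get_mem_stats, is rejected with -32601 as if it did not exist. A schematic sketch of the behavior, with spdk_tgt and rpc.py standing in for the full paths used in the trace:

    # Sketch; mirrors the spdk_tgt flags and RPCs traced above.
    spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    rpc.py spdk_get_version          # allowed  -> version object
    rpc.py env_dpdk_get_mem_stats    # filtered -> -32601 'Method not found'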
00:09:25.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:25.098 08:28:04 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.098 08:28:04 version -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.098 08:28:04 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.098 08:28:04 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.098 08:28:04 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.098 08:28:04 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.098 08:28:04 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.098 08:28:04 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.098 08:28:04 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.098 08:28:04 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.098 08:28:04 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.098 08:28:04 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.098 08:28:04 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.098 08:28:04 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.098 08:28:04 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.098 08:28:04 version -- scripts/common.sh@344 -- # case "$op" in 00:09:25.098 08:28:04 version -- scripts/common.sh@345 -- # : 1 00:09:25.098 08:28:04 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.098 08:28:04 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.098 08:28:04 version -- scripts/common.sh@365 -- # decimal 1 00:09:25.098 08:28:04 version -- scripts/common.sh@353 -- # local d=1 00:09:25.098 08:28:04 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.098 08:28:04 version -- scripts/common.sh@355 -- # echo 1 00:09:25.098 08:28:04 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.098 08:28:04 version -- scripts/common.sh@366 -- # decimal 2 00:09:25.098 08:28:04 version -- scripts/common.sh@353 -- # local d=2 00:09:25.098 08:28:04 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.098 08:28:04 version -- scripts/common.sh@355 -- # echo 2 00:09:25.098 08:28:04 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.098 08:28:04 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.098 08:28:04 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.098 08:28:04 version -- scripts/common.sh@368 -- # return 0 00:09:25.098 08:28:04 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.098 08:28:04 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.098 --rc genhtml_branch_coverage=1 00:09:25.098 --rc genhtml_function_coverage=1 00:09:25.098 --rc genhtml_legend=1 00:09:25.098 --rc geninfo_all_blocks=1 00:09:25.098 --rc geninfo_unexecuted_blocks=1 00:09:25.098 00:09:25.098 ' 00:09:25.098 08:28:04 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.098 --rc genhtml_branch_coverage=1 00:09:25.098 --rc genhtml_function_coverage=1 00:09:25.098 --rc genhtml_legend=1 00:09:25.098 --rc geninfo_all_blocks=1 00:09:25.098 --rc geninfo_unexecuted_blocks=1 00:09:25.098 00:09:25.098 ' 00:09:25.098 08:28:04 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.098 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:25.098 --rc genhtml_branch_coverage=1 00:09:25.098 --rc genhtml_function_coverage=1 00:09:25.098 --rc genhtml_legend=1 00:09:25.098 --rc geninfo_all_blocks=1 00:09:25.098 --rc geninfo_unexecuted_blocks=1 00:09:25.098 00:09:25.098 ' 00:09:25.098 08:28:04 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.098 --rc genhtml_branch_coverage=1 00:09:25.098 --rc genhtml_function_coverage=1 00:09:25.098 --rc genhtml_legend=1 00:09:25.098 --rc geninfo_all_blocks=1 00:09:25.098 --rc geninfo_unexecuted_blocks=1 00:09:25.098 00:09:25.098 ' 00:09:25.098 08:28:04 version -- app/version.sh@17 -- # get_header_version major 00:09:25.098 08:28:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:25.098 08:28:04 version -- app/version.sh@14 -- # cut -f2 00:09:25.098 08:28:04 version -- app/version.sh@14 -- # tr -d '"' 00:09:25.098 08:28:04 version -- app/version.sh@17 -- # major=25 00:09:25.098 08:28:04 version -- app/version.sh@18 -- # get_header_version minor 00:09:25.098 08:28:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:25.098 08:28:04 version -- app/version.sh@14 -- # cut -f2 00:09:25.098 08:28:04 version -- app/version.sh@14 -- # tr -d '"' 00:09:25.098 08:28:04 version -- app/version.sh@18 -- # minor=1 00:09:25.098 08:28:04 version -- app/version.sh@19 -- # get_header_version patch 00:09:25.099 08:28:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:25.099 08:28:04 version -- app/version.sh@14 -- # cut -f2 00:09:25.099 08:28:04 version -- app/version.sh@14 -- # tr -d '"' 00:09:25.099 08:28:04 version -- app/version.sh@19 -- # patch=0 00:09:25.099 08:28:04 version -- app/version.sh@20 -- # get_header_version suffix 00:09:25.099 08:28:04 version -- app/version.sh@14 -- # cut -f2 00:09:25.099 08:28:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:25.099 08:28:04 version -- app/version.sh@14 -- # tr -d '"' 00:09:25.099 08:28:04 version -- app/version.sh@20 -- # suffix=-pre 00:09:25.099 08:28:04 version -- app/version.sh@22 -- # version=25.1 00:09:25.099 08:28:04 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:25.099 08:28:04 version -- app/version.sh@28 -- # version=25.1rc0 00:09:25.099 08:28:04 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:25.099 08:28:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:25.099 08:28:04 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:25.099 08:28:04 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:25.099 00:09:25.099 real 0m0.242s 00:09:25.099 user 0m0.166s 00:09:25.099 sys 0m0.102s 00:09:25.099 ************************************ 00:09:25.099 END TEST version 00:09:25.099 ************************************ 00:09:25.099 08:28:04 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.099 08:28:04 version -- common/autotest_common.sh@10 -- # set +x 00:09:25.099 08:28:04 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:25.099 08:28:04 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:25.099 08:28:04 -- spdk/autotest.sh@194 -- # uname -s 00:09:25.099 08:28:04 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:25.099 08:28:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:25.099 08:28:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:25.099 08:28:04 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:09:25.099 08:28:04 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:25.099 08:28:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:25.099 08:28:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.099 08:28:04 -- common/autotest_common.sh@10 -- # set +x 00:09:25.099 ************************************ 00:09:25.099 START TEST blockdev_nvme 00:09:25.099 ************************************ 00:09:25.099 08:28:04 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:25.411 * Looking for test storage... 00:09:25.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:25.411 08:28:04 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.411 08:28:04 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.411 08:28:04 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.411 08:28:04 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.411 08:28:04 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:09:25.411 08:28:04 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.411 08:28:04 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.411 --rc genhtml_branch_coverage=1 00:09:25.411 --rc genhtml_function_coverage=1 00:09:25.411 --rc genhtml_legend=1 00:09:25.411 --rc geninfo_all_blocks=1 00:09:25.411 --rc geninfo_unexecuted_blocks=1 00:09:25.411 00:09:25.411 ' 00:09:25.411 08:28:04 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.411 --rc genhtml_branch_coverage=1 00:09:25.411 --rc genhtml_function_coverage=1 00:09:25.411 --rc genhtml_legend=1 00:09:25.411 --rc geninfo_all_blocks=1 00:09:25.411 --rc geninfo_unexecuted_blocks=1 00:09:25.411 00:09:25.411 ' 00:09:25.411 08:28:04 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.411 --rc genhtml_branch_coverage=1 00:09:25.411 --rc genhtml_function_coverage=1 00:09:25.411 --rc genhtml_legend=1 00:09:25.411 --rc geninfo_all_blocks=1 00:09:25.411 --rc geninfo_unexecuted_blocks=1 00:09:25.411 00:09:25.411 ' 00:09:25.411 08:28:04 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.411 --rc genhtml_branch_coverage=1 00:09:25.411 --rc genhtml_function_coverage=1 00:09:25.411 --rc genhtml_legend=1 00:09:25.411 --rc geninfo_all_blocks=1 00:09:25.411 --rc geninfo_unexecuted_blocks=1 00:09:25.411 00:09:25.411 ' 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:25.411 08:28:04 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61312 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61312 00:09:25.411 08:28:04 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61312 ']' 00:09:25.411 08:28:04 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.411 08:28:04 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:25.411 08:28:04 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.411 08:28:04 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.412 08:28:04 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.412 08:28:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:25.671 [2024-11-19 08:28:04.704802] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
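Once this target is up, setup_nvme_conf feeds it the JSON emitted by gen_nvme.sh: four bdev_nvme_attach_controller calls, one per QEMU NVMe device at 0000:00:10.0 through 0000:00:13.0, as the load_subsystem_config line below shows. The equivalent hand-written call, sketched for just the first controller:

    # Sketch; the same RPC the test drives in bulk via load_subsystem_config below.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme0 -t PCIe -a 0000:00:10.0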
00:09:25.671 [2024-11-19 08:28:04.705558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61312 ] 00:09:25.671 [2024-11-19 08:28:04.889148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.929 [2024-11-19 08:28:05.045695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.865 08:28:05 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.865 08:28:05 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:09:26.865 08:28:05 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:09:26.865 08:28:05 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:09:26.865 08:28:05 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:09:26.865 08:28:05 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:26.865 08:28:05 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:26.865 08:28:05 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:26.865 08:28:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.865 08:28:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:26.865 08:28:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.865 08:28:06 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:09:26.865 08:28:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.865 08:28:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:26.865 08:28:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.865 08:28:06 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:09:26.865 08:28:06 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:09:26.865 08:28:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.865 08:28:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.126 08:28:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.126 08:28:06 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:09:27.126 08:28:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.126 08:28:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.126 08:28:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.126 08:28:06 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:27.126 08:28:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.126 08:28:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.126 08:28:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.126 08:28:06 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:09:27.126 08:28:06 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:09:27.126 08:28:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.126 08:28:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.126 08:28:06 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:09:27.126 08:28:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.126 08:28:06 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:09:27.126 08:28:06 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:09:27.127 08:28:06 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "358d5e70-9223-4516-b891-04d2ce29746e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "358d5e70-9223-4516-b891-04d2ce29746e",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f019e6d5-1857-483c-a44b-ac33349a18af"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f019e6d5-1857-483c-a44b-ac33349a18af",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f4b5cf8c-dccb-4346-a34b-d5ee5686cee3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f4b5cf8c-dccb-4346-a34b-d5ee5686cee3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "cc68fa5b-f9cc-4a9a-9c78-d94c74f806b5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cc68fa5b-f9cc-4a9a-9c78-d94c74f806b5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "92bf26ec-1739-4dea-9304-12b6cf247364"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "92bf26ec-1739-4dea-9304-12b6cf247364",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "11fa7e86-c7dd-47e8-a37a-c78fb78afbd0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "11fa7e86-c7dd-47e8-a37a-c78fb78afbd0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:27.127 08:28:06 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:09:27.127 08:28:06 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:09:27.127 08:28:06 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:09:27.127 08:28:06 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61312 00:09:27.127 08:28:06 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61312 ']' 00:09:27.127 08:28:06 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61312 00:09:27.127 08:28:06 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:09:27.127 08:28:06 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.127 08:28:06 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61312 00:09:27.127 killing process with pid 61312 00:09:27.127 08:28:06 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.127 08:28:06 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.127 08:28:06 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61312' 00:09:27.127 08:28:06 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61312 00:09:27.127 08:28:06 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61312 00:09:29.659 08:28:08 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:29.659 08:28:08 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:29.659 08:28:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:29.659 08:28:08 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.659 08:28:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:29.659 ************************************ 00:09:29.659 START TEST bdev_hello_world 00:09:29.659 ************************************ 00:09:29.659 08:28:08 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:29.659 [2024-11-19 08:28:08.508492] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:29.659 [2024-11-19 08:28:08.508664] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61402 ] 00:09:29.659 [2024-11-19 08:28:08.678921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.659 [2024-11-19 08:28:08.786384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.226 [2024-11-19 08:28:09.402781] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:30.226 [2024-11-19 08:28:09.403026] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:30.226 [2024-11-19 08:28:09.403069] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:30.226 [2024-11-19 08:28:09.406218] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:30.226 [2024-11-19 08:28:09.406799] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:30.226 [2024-11-19 08:28:09.406845] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:30.226 [2024-11-19 08:28:09.407046] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
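The hello_bdev step above needs no harness state beyond the JSON config; a minimal sketch of the same flow from the repo root (default RPC socket assumed, and an spdk_tgt still running for the enumeration half):

  # list unclaimed bdevs the way blockdev.sh did via rpc_cmd/jq above
  ./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
  # standalone run: hello_bdev attaches the controllers itself from the config,
  # writes "Hello World!" to Nvme0n1, then reads it back and prints it
  sudo ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b Nvme0n1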
00:09:30.226 00:09:30.226 [2024-11-19 08:28:09.407082] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:31.160 ************************************ 00:09:31.160 END TEST bdev_hello_world 00:09:31.160 ************************************ 00:09:31.160 00:09:31.160 real 0m1.987s 00:09:31.160 user 0m1.675s 00:09:31.160 sys 0m0.200s 00:09:31.160 08:28:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.160 08:28:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:31.160 08:28:10 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:09:31.160 08:28:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:31.160 08:28:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.160 08:28:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.419 ************************************ 00:09:31.419 START TEST bdev_bounds 00:09:31.419 ************************************ 00:09:31.419 Process bdevio pid: 61444 00:09:31.419 08:28:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:09:31.419 08:28:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61444 00:09:31.419 08:28:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:31.419 08:28:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61444' 00:09:31.419 08:28:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:31.419 08:28:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61444 00:09:31.419 08:28:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61444 ']' 00:09:31.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.419 08:28:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.419 08:28:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.419 08:28:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.419 08:28:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.419 08:28:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:31.419 [2024-11-19 08:28:10.554320] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:31.419 [2024-11-19 08:28:10.555184] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61444 ] 00:09:31.679 [2024-11-19 08:28:10.741974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:31.679 [2024-11-19 08:28:10.864669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.679 [2024-11-19 08:28:10.864747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.679 [2024-11-19 08:28:10.864753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.613 08:28:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.613 08:28:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:09:32.613 08:28:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:32.613 I/O targets: 00:09:32.613 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:32.613 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:32.613 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:32.613 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:32.613 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:32.613 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:32.613 00:09:32.613 00:09:32.613 CUnit - A unit testing framework for C - Version 2.1-3 00:09:32.613 http://cunit.sourceforge.net/ 00:09:32.613 00:09:32.613 00:09:32.613 Suite: bdevio tests on: Nvme3n1 00:09:32.613 Test: blockdev write read block ...passed 00:09:32.613 Test: blockdev write zeroes read block ...passed 00:09:32.613 Test: blockdev write zeroes read no split ...passed 00:09:32.613 Test: blockdev write zeroes read split ...passed 00:09:32.613 Test: blockdev write zeroes read split partial ...passed 00:09:32.613 Test: blockdev reset ...[2024-11-19 08:28:11.829638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:09:32.613 [2024-11-19 08:28:11.833526] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:09:32.613 passed 00:09:32.613 Test: blockdev write read 8 blocks ...passed 00:09:32.613 Test: blockdev write read size > 128k ...passed 00:09:32.613 Test: blockdev write read invalid size ...passed 00:09:32.613 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:32.613 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:32.613 Test: blockdev write read max offset ...passed 00:09:32.613 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:32.613 Test: blockdev writev readv 8 blocks ...passed 00:09:32.613 Test: blockdev writev readv 30 x 1block ...passed 00:09:32.613 Test: blockdev writev readv block ...passed 00:09:32.613 Test: blockdev writev readv size > 128k ...passed 00:09:32.613 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:32.613 Test: blockdev comparev and writev ...[2024-11-19 08:28:11.843254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5a0a000 len:0x1000 00:09:32.613 passed 00:09:32.613 Test: blockdev nvme passthru rw ...passed 00:09:32.613 Test: blockdev nvme passthru vendor specific ...passed 00:09:32.613 Test: blockdev nvme admin passthru ...[2024-11-19 08:28:11.843443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:32.613 [2024-11-19 08:28:11.844156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:32.613 [2024-11-19 08:28:11.844201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:32.613 passed 00:09:32.613 Test: blockdev copy ...passed 00:09:32.613 Suite: bdevio tests on: Nvme2n3 00:09:32.613 Test: blockdev write read block ...passed 00:09:32.613 Test: blockdev write zeroes read block ...passed 00:09:32.613 Test: blockdev write zeroes read no split ...passed 00:09:32.613 Test: blockdev write zeroes read split ...passed 00:09:32.872 Test: blockdev write zeroes read split partial ...passed 00:09:32.872 Test: blockdev reset ...[2024-11-19 08:28:11.930548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:32.872 passed 00:09:32.872 Test: blockdev write read 8 blocks ...[2024-11-19 08:28:11.934697] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
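The COMPARE FAILURE (02/85) completions in these bdevio suites are expected: the comparev-and-writev check issues a compare against deliberately mismatching data, which is why each failure notice is still followed by passed. The printed pair is status-code-type/status-code, decoded per the NVMe spec:

  # '(02/85)' as logged by spdk_nvme_print_completion
  sct=0x02  # Status Code Type 2h: Media and Data Integrity Errors
  sc=0x85   # Status Code 85h: Compare Failure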
00:09:32.872 passed 00:09:32.872 Test: blockdev write read size > 128k ...passed 00:09:32.872 Test: blockdev write read invalid size ...passed 00:09:32.872 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:32.872 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:32.872 Test: blockdev write read max offset ...passed 00:09:32.872 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:32.872 Test: blockdev writev readv 8 blocks ...passed 00:09:32.872 Test: blockdev writev readv 30 x 1block ...passed 00:09:32.872 Test: blockdev writev readv block ...passed 00:09:32.872 Test: blockdev writev readv size > 128k ...passed 00:09:32.872 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:32.872 Test: blockdev comparev and writev ...[2024-11-19 08:28:11.942670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a5006000 len:0x1000 00:09:32.872 [2024-11-19 08:28:11.942733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:32.872 passed 00:09:32.872 Test: blockdev nvme passthru rw ...passed 00:09:32.872 Test: blockdev nvme passthru vendor specific ...[2024-11-19 08:28:11.943464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:32.872 passed 00:09:32.872 Test: blockdev nvme admin passthru ...[2024-11-19 08:28:11.943508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:32.872 passed 00:09:32.872 Test: blockdev copy ...passed 00:09:32.872 Suite: bdevio tests on: Nvme2n2 00:09:32.872 Test: blockdev write read block ...passed 00:09:32.872 Test: blockdev write zeroes read block ...passed 00:09:32.872 Test: blockdev write zeroes read no split ...passed 00:09:32.872 Test: blockdev write zeroes read split ...passed 00:09:32.872 Test: blockdev write zeroes read split partial ...passed 00:09:32.872 Test: blockdev reset ...[2024-11-19 08:28:12.020092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:32.872 [2024-11-19 08:28:12.024265] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
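Nvme2n1, Nvme2n2 and Nvme2n3 exercised here are three namespaces of the single controller at 0000:00:12.0, as the bdev dump earlier showed. A sketch of confirming that mapping against a live target, in the same rpc.py/jq style the harness uses (an spdk_tgt holding these bdevs assumed):

  ./scripts/rpc.py bdev_get_bdevs | jq -r \
    '.[] | "\(.name) \(.driver_specific.nvme[0].pci_address) nsid=\(.driver_specific.nvme[0].ns_data.id)"'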
00:09:32.872 passed 00:09:32.872 Test: blockdev write read 8 blocks ...passed 00:09:32.872 Test: blockdev write read size > 128k ...passed 00:09:32.872 Test: blockdev write read invalid size ...passed 00:09:32.872 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:32.872 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:32.872 Test: blockdev write read max offset ...passed 00:09:32.872 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:32.872 Test: blockdev writev readv 8 blocks ...passed 00:09:32.872 Test: blockdev writev readv 30 x 1block ...passed 00:09:32.872 Test: blockdev writev readv block ...passed 00:09:32.872 Test: blockdev writev readv size > 128k ...passed 00:09:32.872 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:32.872 Test: blockdev comparev and writev ...passed 00:09:32.872 Test: blockdev nvme passthru rw ...[2024-11-19 08:28:12.031729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e123c000 len:0x1000 00:09:32.872 [2024-11-19 08:28:12.031790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:32.872 passed 00:09:32.872 Test: blockdev nvme passthru vendor specific ...passed 00:09:32.872 Test: blockdev nvme admin passthru ...[2024-11-19 08:28:12.032483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:32.872 [2024-11-19 08:28:12.032531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:32.872 passed 00:09:32.872 Test: blockdev copy ...passed 00:09:32.872 Suite: bdevio tests on: Nvme2n1 00:09:32.872 Test: blockdev write read block ...passed 00:09:32.872 Test: blockdev write zeroes read block ...passed 00:09:32.872 Test: blockdev write zeroes read no split ...passed 00:09:32.872 Test: blockdev write zeroes read split ...passed 00:09:32.872 Test: blockdev write zeroes read split partial ...passed 00:09:32.872 Test: blockdev reset ...[2024-11-19 08:28:12.099719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:32.872 [2024-11-19 08:28:12.103828] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:32.872 passed 00:09:32.872 Test: blockdev write read 8 blocks ...passed 00:09:32.872 Test: blockdev write read size > 128k ...passed 00:09:32.872 Test: blockdev write read invalid size ...passed 00:09:32.872 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:32.872 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:32.872 Test: blockdev write read max offset ...passed 00:09:32.872 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:32.872 Test: blockdev writev readv 8 blocks ...passed 00:09:32.872 Test: blockdev writev readv 30 x 1block ...passed 00:09:32.872 Test: blockdev writev readv block ...passed 00:09:32.872 Test: blockdev writev readv size > 128k ...passed 00:09:32.872 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:32.872 Test: blockdev comparev and writev ...passed 00:09:32.872 Test: blockdev nvme passthru rw ...[2024-11-19 08:28:12.110786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e1238000 len:0x1000 [2024-11-19 08:28:12.110852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:32.872 passed 00:09:32.872 Test: blockdev nvme passthru vendor specific ...[2024-11-19 08:28:12.111514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:32.872 [2024-11-19 08:28:12.111556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:32.872 passed 00:09:32.872 Test: blockdev nvme admin passthru ...passed 00:09:32.872 Test: blockdev copy ...passed 00:09:32.872 Suite: bdevio tests on: Nvme1n1 00:09:32.872 Test: blockdev write read block ...passed 00:09:32.872 Test: blockdev write zeroes read block ...passed 00:09:32.872 Test: blockdev write zeroes read no split ...passed 00:09:32.872 Test: blockdev write zeroes read split ...passed 00:09:33.131 Test: blockdev write zeroes read split partial ...passed 00:09:33.131 Test: blockdev reset ...[2024-11-19 08:28:12.186649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:33.131 [2024-11-19 08:28:12.190314] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:09:33.131 passed 00:09:33.131 Test: blockdev write read 8 blocks ...passed 00:09:33.131 Test: blockdev write read size > 128k ...
00:09:33.131 passed 00:09:33.131 Test: blockdev write read invalid size ...passed 00:09:33.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:33.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:33.131 Test: blockdev write read max offset ...passed 00:09:33.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:33.131 Test: blockdev writev readv 8 blocks ...passed 00:09:33.131 Test: blockdev writev readv 30 x 1block ...passed 00:09:33.131 Test: blockdev writev readv block ...passed 00:09:33.131 Test: blockdev writev readv size > 128k ...passed 00:09:33.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:33.131 Test: blockdev comparev and writev ...[2024-11-19 08:28:12.198023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e1234000 len:0x1000 00:09:33.131 [2024-11-19 08:28:12.198089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:33.131 passed 00:09:33.131 Test: blockdev nvme passthru rw ...passed 00:09:33.131 Test: blockdev nvme passthru vendor specific ...passed 00:09:33.131 Test: blockdev nvme admin passthru ...[2024-11-19 08:28:12.198827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:33.131 [2024-11-19 08:28:12.198887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:33.131 passed 00:09:33.131 Test: blockdev copy ...passed 00:09:33.131 Suite: bdevio tests on: Nvme0n1 00:09:33.131 Test: blockdev write read block ...passed 00:09:33.131 Test: blockdev write zeroes read block ...passed 00:09:33.131 Test: blockdev write zeroes read no split ...passed 00:09:33.131 Test: blockdev write zeroes read split ...passed 00:09:33.131 Test: blockdev write zeroes read split partial ...passed 00:09:33.131 Test: blockdev reset ...[2024-11-19 08:28:12.277569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:33.131 passed 00:09:33.131 Test: blockdev write read 8 blocks ...[2024-11-19 08:28:12.281491] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:33.131 passed 00:09:33.131 Test: blockdev write read size > 128k ...passed 00:09:33.131 Test: blockdev write read invalid size ...passed 00:09:33.131 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:33.131 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:33.131 Test: blockdev write read max offset ...passed 00:09:33.131 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:33.131 Test: blockdev writev readv 8 blocks ...passed 00:09:33.131 Test: blockdev writev readv 30 x 1block ...passed 00:09:33.131 Test: blockdev writev readv block ...passed 00:09:33.131 Test: blockdev writev readv size > 128k ...passed 00:09:33.131 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:33.131 Test: blockdev comparev and writev ...passed 00:09:33.131 Test: blockdev nvme passthru rw ...[2024-11-19 08:28:12.288916] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:33.131 separate metadata which is not supported yet. 
00:09:33.131 passed 00:09:33.131 Test: blockdev nvme passthru vendor specific ...passed 00:09:33.131 Test: blockdev nvme admin passthru ...[2024-11-19 08:28:12.289619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:33.131 [2024-11-19 08:28:12.289682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:33.131 passed 00:09:33.131 Test: blockdev copy ...passed 00:09:33.131 00:09:33.131 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.131 suites 6 6 n/a 0 0 00:09:33.131 tests 138 138 138 0 0 00:09:33.131 asserts 893 893 893 0 n/a 00:09:33.131 00:09:33.131 Elapsed time = 1.472 seconds 00:09:33.131 0 00:09:33.131 08:28:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61444 00:09:33.131 08:28:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61444 ']' 00:09:33.131 08:28:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61444 00:09:33.131 08:28:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:09:33.131 08:28:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.131 08:28:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61444 00:09:33.131 killing process with pid 61444 00:09:33.131 08:28:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:33.131 08:28:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:33.131 08:28:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61444' 00:09:33.131 08:28:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61444 00:09:33.131 08:28:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61444 00:09:34.067 ************************************ 00:09:34.067 END TEST bdev_bounds 00:09:34.067 ************************************ 00:09:34.067 08:28:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:34.067 00:09:34.067 real 0m2.790s 00:09:34.067 user 0m7.345s 00:09:34.067 sys 0m0.353s 00:09:34.067 08:28:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.067 08:28:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:34.067 08:28:13 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:34.067 08:28:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:34.067 08:28:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.067 08:28:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:34.067 ************************************ 00:09:34.067 START TEST bdev_nbd 00:09:34.067 ************************************ 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61509 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61509 /var/tmp/spdk-nbd.sock 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61509 ']' 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.067 08:28:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:34.326 [2024-11-19 08:28:13.431377] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
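The nbd_function_test that this bdev_svc instance backs exports each bdev as a kernel block device and probes it with direct I/O; the per-device cycle it drives over the dedicated socket, condensed (a free /dev/nbd0 and the nbd kernel module assumed):

  sock=/var/tmp/spdk-nbd.sock
  sudo ./scripts/rpc.py -s $sock nbd_start_disk Nvme0n1 /dev/nbd0    # export the bdev via the kernel nbd driver
  sudo dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # the 4 KiB read waitfornbd performs below
  sudo ./scripts/rpc.py -s $sock nbd_stop_disk /dev/nbd0             # tear the export down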
00:09:34.326 [2024-11-19 08:28:13.431538] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.584 [2024-11-19 08:28:13.621297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.584 [2024-11-19 08:28:13.744424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.151 08:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.151 08:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:09:35.151 08:28:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:35.151 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:35.151 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:35.151 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:35.151 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:35.151 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:35.151 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:35.151 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:35.151 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:35.151 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:35.151 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:35.151 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:35.151 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.719 1+0 records in 
00:09:35.719 1+0 records out 00:09:35.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594719 s, 6.9 MB/s 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:35.719 08:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:35.978 1+0 records in 00:09:35.978 1+0 records out 00:09:35.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056018 s, 7.3 MB/s 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:35.978 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:36.236 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:36.236 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:36.236 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:09:36.236 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:09:36.236 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:36.236 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:36.236 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:36.237 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:09:36.237 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:36.237 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:36.237 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:36.237 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:36.237 1+0 records in 00:09:36.237 1+0 records out 00:09:36.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000718634 s, 5.7 MB/s 00:09:36.237 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.237 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:36.237 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.237 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:36.237 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:36.237 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:36.237 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:36.495 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:36.754 1+0 records in 00:09:36.754 1+0 records out 00:09:36.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707235 s, 5.8 MB/s 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.754 08:28:15 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:36.754 08:28:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:37.013 1+0 records in 00:09:37.013 1+0 records out 00:09:37.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000756739 s, 5.4 MB/s 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:37.013 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:37.272 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:37.272 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:37.272 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:37.272 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:09:37.272 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:37.272 08:28:16 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:37.272 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:37.272 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:09:37.272 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:37.272 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:37.272 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:37.272 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:37.272 1+0 records in 00:09:37.272 1+0 records out 00:09:37.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706294 s, 5.8 MB/s 00:09:37.532 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:37.532 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:37.532 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:37.532 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:37.532 08:28:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:37.532 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:37.532 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:37.532 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:37.790 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:37.790 { 00:09:37.790 "nbd_device": "/dev/nbd0", 00:09:37.790 "bdev_name": "Nvme0n1" 00:09:37.790 }, 00:09:37.790 { 00:09:37.790 "nbd_device": "/dev/nbd1", 00:09:37.790 "bdev_name": "Nvme1n1" 00:09:37.790 }, 00:09:37.790 { 00:09:37.790 "nbd_device": "/dev/nbd2", 00:09:37.790 "bdev_name": "Nvme2n1" 00:09:37.790 }, 00:09:37.790 { 00:09:37.790 "nbd_device": "/dev/nbd3", 00:09:37.790 "bdev_name": "Nvme2n2" 00:09:37.790 }, 00:09:37.790 { 00:09:37.790 "nbd_device": "/dev/nbd4", 00:09:37.790 "bdev_name": "Nvme2n3" 00:09:37.790 }, 00:09:37.790 { 00:09:37.790 "nbd_device": "/dev/nbd5", 00:09:37.790 "bdev_name": "Nvme3n1" 00:09:37.790 } 00:09:37.790 ]' 00:09:37.790 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:37.790 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:37.790 { 00:09:37.790 "nbd_device": "/dev/nbd0", 00:09:37.790 "bdev_name": "Nvme0n1" 00:09:37.790 }, 00:09:37.790 { 00:09:37.790 "nbd_device": "/dev/nbd1", 00:09:37.790 "bdev_name": "Nvme1n1" 00:09:37.790 }, 00:09:37.790 { 00:09:37.790 "nbd_device": "/dev/nbd2", 00:09:37.790 "bdev_name": "Nvme2n1" 00:09:37.790 }, 00:09:37.790 { 00:09:37.790 "nbd_device": "/dev/nbd3", 00:09:37.790 "bdev_name": "Nvme2n2" 00:09:37.790 }, 00:09:37.790 { 00:09:37.790 "nbd_device": "/dev/nbd4", 00:09:37.790 "bdev_name": "Nvme2n3" 00:09:37.790 }, 00:09:37.790 { 00:09:37.790 "nbd_device": "/dev/nbd5", 00:09:37.790 "bdev_name": "Nvme3n1" 00:09:37.790 } 00:09:37.790 ]' 00:09:37.790 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:37.790 08:28:16 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:37.790 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.790 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:37.790 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:37.790 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:37.790 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:37.790 08:28:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:38.048 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:38.048 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:38.048 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:38.048 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.048 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.048 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:38.048 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:38.048 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.048 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.048 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:38.306 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:38.306 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:38.306 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:38.306 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.306 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.306 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:38.306 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:38.306 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.306 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.306 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:38.565 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:38.565 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:38.565 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:38.565 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.565 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.565 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:38.565 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:38.565 08:28:17 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:38.565 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.565 08:28:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:39.134 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:39.134 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:39.134 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:39.134 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.134 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.134 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:39.134 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:39.134 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.134 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:39.134 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:39.393 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:39.393 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:39.393 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:39.393 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.393 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.393 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:39.393 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:39.393 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.393 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:39.393 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:39.652 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:39.652 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:39.652 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:39.652 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:39.652 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:39.652 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:39.652 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:39.652 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:39.652 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:39.652 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.652 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:39.912 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:39.912 08:28:18 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:39.912 08:28:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:39.912 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:40.171 /dev/nbd0 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:40.171 
08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:40.171 1+0 records in 00:09:40.171 1+0 records out 00:09:40.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000746216 s, 5.5 MB/s 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:40.171 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:40.738 /dev/nbd1 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:40.738 1+0 records in 00:09:40.738 1+0 records out 00:09:40.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730404 s, 5.6 MB/s 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:40.738 08:28:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:40.996 /dev/nbd10 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:40.996 1+0 records in 00:09:40.996 1+0 records out 00:09:40.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543826 s, 7.5 MB/s 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:40.996 08:28:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:41.255 /dev/nbd11 00:09:41.255 08:28:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:41.255 08:28:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:41.255 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:09:41.255 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:41.255 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:41.255 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:41.255 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:09:41.255 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:41.255 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:41.256 08:28:20 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:41.256 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:41.256 1+0 records in 00:09:41.256 1+0 records out 00:09:41.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052974 s, 7.7 MB/s 00:09:41.256 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:41.256 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:41.256 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:41.256 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:41.256 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:41.256 08:28:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:41.256 08:28:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:41.256 08:28:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:41.515 /dev/nbd12 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:41.515 1+0 records in 00:09:41.515 1+0 records out 00:09:41.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611558 s, 6.7 MB/s 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:41.515 08:28:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:42.082 /dev/nbd13 
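The readiness probe that follows every nbd_start_disk call above, and runs once more for nbd13 just below, is waitfornbd from common/autotest_common.sh. A condensed sketch reconstructed from the traced lines — the sleep back-off and the dd retry path are assumptions the log never exercises, and /tmp/nbdtest stands in for the test/bdev/nbdtest scratch file used in the run:

    waitfornbd() {
        local nbd_name=$1
        local i size

        # Wait for the kernel to list the device in /proc/partitions,
        # bounded at 20 attempts as in the traced loop.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # interval assumed; the traced run never waits
        done

        # Then prove the device answers I/O: read one 4 KiB block with
        # O_DIRECT and require that a full block came back.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
            sleep 0.1   # interval assumed
        done
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # non-empty read => device is serving I/O
    }

The direct read is the part that matters: it guards against the window where the device node already exists in the kernel but the NBD connection is not yet serving requests.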
00:09:42.082 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:42.082 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:42.083 1+0 records in 00:09:42.083 1+0 records out 00:09:42.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535932 s, 7.6 MB/s 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.083 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:42.342 { 00:09:42.342 "nbd_device": "/dev/nbd0", 00:09:42.342 "bdev_name": "Nvme0n1" 00:09:42.342 }, 00:09:42.342 { 00:09:42.342 "nbd_device": "/dev/nbd1", 00:09:42.342 "bdev_name": "Nvme1n1" 00:09:42.342 }, 00:09:42.342 { 00:09:42.342 "nbd_device": "/dev/nbd10", 00:09:42.342 "bdev_name": "Nvme2n1" 00:09:42.342 }, 00:09:42.342 { 00:09:42.342 "nbd_device": "/dev/nbd11", 00:09:42.342 "bdev_name": "Nvme2n2" 00:09:42.342 }, 00:09:42.342 { 00:09:42.342 "nbd_device": "/dev/nbd12", 00:09:42.342 "bdev_name": "Nvme2n3" 00:09:42.342 }, 00:09:42.342 { 00:09:42.342 "nbd_device": "/dev/nbd13", 00:09:42.342 "bdev_name": "Nvme3n1" 00:09:42.342 } 00:09:42.342 ]' 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:42.342 { 00:09:42.342 "nbd_device": "/dev/nbd0", 00:09:42.342 "bdev_name": "Nvme0n1" 00:09:42.342 }, 00:09:42.342 { 00:09:42.342 "nbd_device": "/dev/nbd1", 00:09:42.342 "bdev_name": "Nvme1n1" 00:09:42.342 
}, 00:09:42.342 { 00:09:42.342 "nbd_device": "/dev/nbd10", 00:09:42.342 "bdev_name": "Nvme2n1" 00:09:42.342 }, 00:09:42.342 { 00:09:42.342 "nbd_device": "/dev/nbd11", 00:09:42.342 "bdev_name": "Nvme2n2" 00:09:42.342 }, 00:09:42.342 { 00:09:42.342 "nbd_device": "/dev/nbd12", 00:09:42.342 "bdev_name": "Nvme2n3" 00:09:42.342 }, 00:09:42.342 { 00:09:42.342 "nbd_device": "/dev/nbd13", 00:09:42.342 "bdev_name": "Nvme3n1" 00:09:42.342 } 00:09:42.342 ]' 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:42.342 /dev/nbd1 00:09:42.342 /dev/nbd10 00:09:42.342 /dev/nbd11 00:09:42.342 /dev/nbd12 00:09:42.342 /dev/nbd13' 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:42.342 /dev/nbd1 00:09:42.342 /dev/nbd10 00:09:42.342 /dev/nbd11 00:09:42.342 /dev/nbd12 00:09:42.342 /dev/nbd13' 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:42.342 256+0 records in 00:09:42.342 256+0 records out 00:09:42.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0091506 s, 115 MB/s 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:42.342 256+0 records in 00:09:42.342 256+0 records out 00:09:42.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14836 s, 7.1 MB/s 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:42.342 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:42.601 256+0 records in 00:09:42.601 256+0 records out 00:09:42.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143784 s, 7.3 MB/s 00:09:42.601 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:42.601 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:42.860 256+0 records in 00:09:42.860 256+0 records out 00:09:42.860 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.156691 s, 6.7 MB/s 00:09:42.860 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:42.860 08:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:42.860 256+0 records in 00:09:42.860 256+0 records out 00:09:42.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166447 s, 6.3 MB/s 00:09:42.860 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:42.860 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:43.119 256+0 records in 00:09:43.119 256+0 records out 00:09:43.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15779 s, 6.6 MB/s 00:09:43.119 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:43.119 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:43.437 256+0 records in 00:09:43.437 256+0 records out 00:09:43.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15448 s, 6.8 MB/s 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:43.437 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:43.711 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:43.711 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:43.711 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:43.711 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:43.711 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:43.711 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:43.711 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:43.711 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:43.711 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:43.711 08:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:43.969 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:43.969 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:43.969 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:43.969 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:43.969 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:43.969 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:43.969 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:43.969 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:43.969 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:43.969 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:44.228 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:44.228 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:44.228 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:44.228 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:44.228 08:28:23 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:44.228 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:44.228 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:44.228 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:44.228 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:44.228 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:44.486 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:44.486 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:44.486 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:44.486 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:44.486 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:44.486 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:44.486 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:44.486 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:44.486 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:44.486 08:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:45.051 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:45.051 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:45.051 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:45.051 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.051 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.051 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:45.051 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:45.051 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.051 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:45.051 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:45.308 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:45.308 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:45.308 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:45.308 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:45.308 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:45.308 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:45.308 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:45.308 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:45.308 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:45.308 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.308 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:45.566 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:45.566 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:45.566 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:45.566 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:45.566 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:45.566 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:45.566 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:45.566 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:45.566 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:45.566 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:45.566 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:45.566 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:45.566 08:28:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:45.566 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.566 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:45.566 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:45.822 malloc_lvol_verify 00:09:45.822 08:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:46.079 6de7565b-79ee-4bc2-aae3-a0d51509f6c9 00:09:46.079 08:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:46.337 8eabbafb-879a-4bd3-b4fc-8057058e6af9 00:09:46.337 08:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:46.595 /dev/nbd0 00:09:46.595 08:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:46.595 08:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:46.595 08:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:46.595 08:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:46.595 08:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:46.595 mke2fs 1.47.0 (5-Feb-2023) 00:09:46.595 Discarding device blocks: 0/4096 done 00:09:46.595 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:46.595 00:09:46.595 Allocating group tables: 0/1 done 00:09:46.595 Writing inode tables: 0/1 done 00:09:46.595 Creating journal (1024 blocks): done 00:09:46.595 Writing superblocks and filesystem accounting information: 0/1 done 00:09:46.595 00:09:46.595 08:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:46.595 08:28:25 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.595 08:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:46.595 08:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:46.595 08:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:46.595 08:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.595 08:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:46.853 08:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:46.853 08:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:46.853 08:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:46.853 08:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.853 08:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.853 08:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:46.853 08:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:46.853 08:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.853 08:28:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61509 00:09:46.853 08:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61509 ']' 00:09:46.853 08:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61509 00:09:46.853 08:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:09:46.853 08:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.853 08:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61509 00:09:47.111 08:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.111 08:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.111 killing process with pid 61509 00:09:47.111 08:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61509' 00:09:47.111 08:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61509 00:09:47.111 08:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61509 00:09:48.043 08:28:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:48.043 00:09:48.043 real 0m13.903s 00:09:48.043 user 0m20.373s 00:09:48.043 sys 0m4.194s 00:09:48.043 08:28:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.043 08:28:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:48.043 ************************************ 00:09:48.043 END TEST bdev_nbd 00:09:48.043 ************************************ 00:09:48.043 08:28:27 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:09:48.043 08:28:27 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:09:48.043 skipping fio tests on NVMe due to multi-ns failures. 00:09:48.043 08:28:27 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
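The stop-and-wait churn that fills the nbd test above reduces to two small helpers from bdev/nbd_common.sh. A simplified reconstruction from the trace (argument handling condensed, poll interval assumed):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # Ask the target to stop each device, then poll until the kernel drops
    # it from /proc/partitions (at most 20 attempts, per the traced bounds).
    nbd_stop_disks() {
        local nbd i
        for nbd in "$@"; do
            $rpc_py nbd_stop_disk "$nbd"
            for ((i = 1; i <= 20; i++)); do
                grep -q -w "$(basename "$nbd")" /proc/partitions || break
                sleep 0.1   # interval assumed
            done
        done
    }

    # Count the devices the target still exports. grep -c exits non-zero on
    # a zero count, which is why a bare 'true' shows up in the trace above.
    nbd_get_count() {
        $rpc_py nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
    }

After the final teardown the suite asserts that the count is back to 0 — the '[' 0 -ne 0 ']' tests visible above — before killing the spdk-nbd target process.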
00:09:48.043 08:28:27 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:48.043 08:28:27 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:48.043 08:28:27 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:48.043 08:28:27 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.043 08:28:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:48.043 ************************************ 00:09:48.043 START TEST bdev_verify 00:09:48.043 ************************************ 00:09:48.043 08:28:27 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:48.301 [2024-11-19 08:28:27.348312] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:09:48.301 [2024-11-19 08:28:27.348487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61922 ] 00:09:48.301 [2024-11-19 08:28:27.533412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:48.560 [2024-11-19 08:28:27.661421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.560 [2024-11-19 08:28:27.661425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.126 Running I/O for 5 seconds... 00:09:51.506 19392.00 IOPS, 75.75 MiB/s [2024-11-19T08:28:31.736Z] 19072.00 IOPS, 74.50 MiB/s [2024-11-19T08:28:32.670Z] 19456.00 IOPS, 76.00 MiB/s [2024-11-19T08:28:33.604Z] 19696.00 IOPS, 76.94 MiB/s [2024-11-19T08:28:33.604Z] 19340.80 IOPS, 75.55 MiB/s 00:09:54.308 Latency(us) 00:09:54.308 [2024-11-19T08:28:33.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.308 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:54.308 Verification LBA range: start 0x0 length 0xbd0bd 00:09:54.308 Nvme0n1 : 5.05 1545.48 6.04 0.00 0.00 82573.48 15966.95 116773.24 00:09:54.308 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:54.308 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:54.308 Nvme0n1 : 5.03 1628.03 6.36 0.00 0.00 78339.75 15966.95 102951.10 00:09:54.308 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:54.308 Verification LBA range: start 0x0 length 0xa0000 00:09:54.308 Nvme1n1 : 5.05 1544.96 6.04 0.00 0.00 82493.82 17754.30 108670.60 00:09:54.308 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:54.308 Verification LBA range: start 0xa0000 length 0xa0000 00:09:54.308 Nvme1n1 : 5.07 1640.35 6.41 0.00 0.00 77682.44 10485.76 97708.22 00:09:54.308 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:54.308 Verification LBA range: start 0x0 length 0x80000 00:09:54.308 Nvme2n1 : 5.06 1544.45 6.03 0.00 0.00 82404.95 16801.05 116773.24 00:09:54.308 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:54.308 Verification LBA range: start 0x80000 length 0x80000 00:09:54.308 Nvme2n1 : 5.07 1639.88 6.41 0.00 0.00 77545.11 10307.03 93895.21 00:09:54.308 Job: 
Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:54.329 Verification LBA range: start 0x0 length 0x80000 00:09:54.329 Nvme2n2 : 5.06 1543.94 6.03 0.00 0.00 82302.41 16324.42 120109.61 00:09:54.329 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:54.329 Verification LBA range: start 0x80000 length 0x80000 00:09:54.329 Nvme2n2 : 5.07 1639.41 6.40 0.00 0.00 77415.45 10128.29 96278.34 00:09:54.329 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:54.329 Verification LBA range: start 0x0 length 0x80000 00:09:54.329 Nvme2n3 : 5.06 1543.44 6.03 0.00 0.00 82199.66 14775.39 120109.61 00:09:54.329 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:54.329 Verification LBA range: start 0x80000 length 0x80000 00:09:54.329 Nvme2n3 : 5.08 1638.95 6.40 0.00 0.00 77310.50 10307.03 99138.09 00:09:54.329 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:54.329 Verification LBA range: start 0x0 length 0x20000 00:09:54.329 Nvme3n1 : 5.07 1553.77 6.07 0.00 0.00 81613.16 3232.12 120109.61 00:09:54.329 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:54.329 Verification LBA range: start 0x20000 length 0x20000 00:09:54.329 Nvme3n1 : 5.08 1638.49 6.40 0.00 0.00 77235.78 10426.18 103427.72 00:09:54.329 [2024-11-19T08:28:33.625Z] =================================================================================================================== 00:09:54.329 [2024-11-19T08:28:33.625Z] Total : 19101.17 74.61 0.00 0.00 79855.31 3232.12 120109.61 00:09:55.702 00:09:55.702 real 0m7.487s 00:09:55.702 user 0m13.809s 00:09:55.702 sys 0m0.266s 00:09:55.702 08:28:34 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.702 08:28:34 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:55.702 ************************************ 00:09:55.702 END TEST bdev_verify 00:09:55.702 ************************************ 00:09:55.702 08:28:34 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:55.702 08:28:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:55.702 08:28:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.702 08:28:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:55.702 ************************************ 00:09:55.702 START TEST bdev_verify_big_io 00:09:55.702 ************************************ 00:09:55.702 08:28:34 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:55.702 [2024-11-19 08:28:34.888577] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:09:55.702 [2024-11-19 08:28:34.888774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62026 ] 00:09:55.960 [2024-11-19 08:28:35.075674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:55.960 [2024-11-19 08:28:35.203345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.960 [2024-11-19 08:28:35.203353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.895 Running I/O for 5 seconds... 00:10:00.804 817.00 IOPS, 51.06 MiB/s [2024-11-19T08:28:42.002Z] 1780.50 IOPS, 111.28 MiB/s [2024-11-19T08:28:42.261Z] 2046.00 IOPS, 127.87 MiB/s 00:10:02.965 Latency(us) 00:10:02.965 [2024-11-19T08:28:42.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.965 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:02.965 Verification LBA range: start 0x0 length 0xbd0b 00:10:02.965 Nvme0n1 : 5.91 113.79 7.11 0.00 0.00 1076274.62 15728.64 1121023.07 00:10:02.965 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:02.965 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:02.965 Nvme0n1 : 5.74 122.67 7.67 0.00 0.00 1006076.78 17396.83 1044763.00 00:10:02.965 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:02.965 Verification LBA range: start 0x0 length 0xa000 00:10:02.965 Nvme1n1 : 5.81 106.13 6.63 0.00 0.00 1116528.34 115343.36 1853119.77 00:10:02.965 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:02.965 Verification LBA range: start 0xa000 length 0xa000 00:10:02.965 Nvme1n1 : 5.86 127.49 7.97 0.00 0.00 945684.56 70063.94 1044763.00 00:10:02.965 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:02.965 Verification LBA range: start 0x0 length 0x8000 00:10:02.965 Nvme2n1 : 5.93 111.38 6.96 0.00 0.00 1036238.62 90082.21 1875997.79 00:10:02.965 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:02.965 Verification LBA range: start 0x8000 length 0x8000 00:10:02.965 Nvme2n1 : 5.86 126.27 7.89 0.00 0.00 923152.66 70540.57 880803.84 00:10:02.965 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:02.965 Verification LBA range: start 0x0 length 0x8000 00:10:02.965 Nvme2n2 : 5.91 116.37 7.27 0.00 0.00 958674.95 91512.09 1372681.31 00:10:02.965 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:02.965 Verification LBA range: start 0x8000 length 0x8000 00:10:02.965 Nvme2n2 : 5.86 125.74 7.86 0.00 0.00 896568.37 67680.81 922746.88 00:10:02.965 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:02.965 Verification LBA range: start 0x0 length 0x8000 00:10:02.965 Nvme2n3 : 5.94 120.98 7.56 0.00 0.00 898383.46 14120.03 1967509.88 00:10:02.965 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:02.965 Verification LBA range: start 0x8000 length 0x8000 00:10:02.965 Nvme2n3 : 5.91 133.95 8.37 0.00 0.00 822990.74 45517.73 949437.91 00:10:02.965 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:02.965 Verification LBA range: start 0x0 length 0x2000 00:10:02.965 Nvme3n1 : 6.02 145.88 9.12 0.00 0.00 724313.37 1675.64 2013265.92 00:10:02.965 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:10:02.965 Verification LBA range: start 0x2000 length 0x2000 00:10:02.965 Nvme3n1 : 5.93 146.27 9.14 0.00 0.00 734144.50 5332.25 968502.92 00:10:02.965 [2024-11-19T08:28:42.261Z] =================================================================================================================== 00:10:02.965 [2024-11-19T08:28:42.261Z] Total : 1496.91 93.56 0.00 0.00 916499.88 1675.64 2013265.92 00:10:04.865 00:10:04.865 real 0m8.875s 00:10:04.865 user 0m16.556s 00:10:04.865 sys 0m0.292s 00:10:04.865 08:28:43 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.865 08:28:43 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:04.865 ************************************ 00:10:04.865 END TEST bdev_verify_big_io 00:10:04.865 ************************************ 00:10:04.865 08:28:43 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:04.865 08:28:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:04.865 08:28:43 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.865 08:28:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:04.865 ************************************ 00:10:04.865 START TEST bdev_write_zeroes 00:10:04.865 ************************************ 00:10:04.865 08:28:43 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:04.865 [2024-11-19 08:28:43.818955] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:04.865 [2024-11-19 08:28:43.819149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62140 ] 00:10:04.865 [2024-11-19 08:28:44.009594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.865 [2024-11-19 08:28:44.140098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.799 Running I/O for 1 seconds... 
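For reference while the one-second write_zeroes run above completes: the three bdevperf passes in this suite share a single invocation shape, and the flags below are copied from the run_test lines recorded in this log (-q is the queue depth, -o the I/O size in bytes, -w the workload, -t the runtime in seconds, -m the reactor core mask; the empty trailing argument seen in the log is omitted here):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    "$bdevperf" --json "$conf" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3   # TEST bdev_verify
    "$bdevperf" --json "$conf" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3   # TEST bdev_verify_big_io
    "$bdevperf" --json "$conf" -q 128 -o 4096  -w write_zeroes -t 1             # TEST bdev_write_zeroes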
00:10:06.733 44160.00 IOPS, 172.50 MiB/s 00:10:06.733 Latency(us) 00:10:06.733 [2024-11-19T08:28:46.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.733 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:06.733 Nvme0n1 : 1.03 7351.49 28.72 0.00 0.00 17368.19 11915.64 43611.23 00:10:06.733 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:06.733 Nvme1n1 : 1.03 7342.39 28.68 0.00 0.00 17369.68 12153.95 43611.23 00:10:06.733 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:06.733 Nvme2n1 : 1.03 7333.20 28.65 0.00 0.00 17335.02 11975.21 43134.60 00:10:06.733 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:06.733 Nvme2n2 : 1.03 7324.05 28.61 0.00 0.00 17274.90 10604.92 42896.29 00:10:06.733 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:06.733 Nvme2n3 : 1.03 7315.11 28.57 0.00 0.00 17248.61 7745.16 42657.98 00:10:06.733 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:06.733 Nvme3n1 : 1.03 7306.27 28.54 0.00 0.00 17224.35 6821.70 43372.92 00:10:06.733 [2024-11-19T08:28:46.029Z] =================================================================================================================== 00:10:06.733 [2024-11-19T08:28:46.029Z] Total : 43972.51 171.77 0.00 0.00 17303.46 6821.70 43611.23 00:10:07.668 00:10:07.668 real 0m3.160s 00:10:07.668 user 0m2.779s 00:10:07.668 sys 0m0.252s 00:10:07.668 08:28:46 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.668 08:28:46 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:07.668 ************************************ 00:10:07.668 END TEST bdev_write_zeroes 00:10:07.668 ************************************ 00:10:07.668 08:28:46 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:07.668 08:28:46 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:07.668 08:28:46 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.668 08:28:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:07.668 ************************************ 00:10:07.668 START TEST bdev_json_nonenclosed 00:10:07.668 ************************************ 00:10:07.668 08:28:46 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:07.927 [2024-11-19 08:28:47.018515] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:10:07.927 [2024-11-19 08:28:47.018710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62199 ] 00:10:07.927 [2024-11-19 08:28:47.199554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.186 [2024-11-19 08:28:47.321913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.186 [2024-11-19 08:28:47.322027] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:08.186 [2024-11-19 08:28:47.322056] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:08.186 [2024-11-19 08:28:47.322070] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:08.444 00:10:08.444 real 0m0.684s 00:10:08.444 user 0m0.460s 00:10:08.444 sys 0m0.118s 00:10:08.444 08:28:47 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.444 08:28:47 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:08.444 ************************************ 00:10:08.444 END TEST bdev_json_nonenclosed 00:10:08.444 ************************************ 00:10:08.444 08:28:47 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:08.444 08:28:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:08.444 08:28:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.444 08:28:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:08.444 ************************************ 00:10:08.444 START TEST bdev_json_nonarray 00:10:08.444 ************************************ 00:10:08.444 08:28:47 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:08.444 [2024-11-19 08:28:47.728297] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:08.444 [2024-11-19 08:28:47.728446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62219 ] 00:10:08.701 [2024-11-19 08:28:47.906599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.960 [2024-11-19 08:28:48.031592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.960 [2024-11-19 08:28:48.031745] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:10:08.960 [2024-11-19 08:28:48.031779] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:08.960 [2024-11-19 08:28:48.031796] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:09.220 00:10:09.220 real 0m0.668s 00:10:09.220 user 0m0.440s 00:10:09.220 sys 0m0.123s 00:10:09.220 08:28:48 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.220 08:28:48 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:09.220 ************************************ 00:10:09.220 END TEST bdev_json_nonarray 00:10:09.220 ************************************ 00:10:09.220 08:28:48 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:10:09.220 08:28:48 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:10:09.220 08:28:48 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:10:09.220 08:28:48 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:10:09.220 08:28:48 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:10:09.220 08:28:48 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:09.220 08:28:48 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:09.220 08:28:48 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:10:09.220 08:28:48 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:10:09.220 08:28:48 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:10:09.220 08:28:48 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:10:09.220 00:10:09.220 real 0m43.987s 00:10:09.220 user 1m7.864s 00:10:09.220 sys 0m6.621s 00:10:09.220 08:28:48 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.220 ************************************ 00:10:09.220 END TEST blockdev_nvme 00:10:09.220 08:28:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:09.220 ************************************ 00:10:09.220 08:28:48 -- spdk/autotest.sh@209 -- # uname -s 00:10:09.220 08:28:48 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:10:09.220 08:28:48 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:09.220 08:28:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:09.220 08:28:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.220 08:28:48 -- common/autotest_common.sh@10 -- # set +x 00:10:09.220 ************************************ 00:10:09.220 START TEST blockdev_nvme_gpt 00:10:09.220 ************************************ 00:10:09.220 08:28:48 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:09.220 * Looking for test storage... 
00:10:09.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:09.220 08:28:48 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:09.220 08:28:48 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:10:09.220 08:28:48 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:09.479 08:28:48 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.479 08:28:48 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:10:09.479 08:28:48 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.479 08:28:48 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:09.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.479 --rc genhtml_branch_coverage=1 00:10:09.479 --rc genhtml_function_coverage=1 00:10:09.479 --rc genhtml_legend=1 00:10:09.479 --rc geninfo_all_blocks=1 00:10:09.479 --rc geninfo_unexecuted_blocks=1 00:10:09.479 00:10:09.479 ' 00:10:09.479 08:28:48 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:09.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.479 --rc 
genhtml_branch_coverage=1 00:10:09.479 --rc genhtml_function_coverage=1 00:10:09.479 --rc genhtml_legend=1 00:10:09.479 --rc geninfo_all_blocks=1 00:10:09.479 --rc geninfo_unexecuted_blocks=1 00:10:09.479 00:10:09.479 ' 00:10:09.479 08:28:48 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:09.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.479 --rc genhtml_branch_coverage=1 00:10:09.479 --rc genhtml_function_coverage=1 00:10:09.479 --rc genhtml_legend=1 00:10:09.479 --rc geninfo_all_blocks=1 00:10:09.479 --rc geninfo_unexecuted_blocks=1 00:10:09.479 00:10:09.479 ' 00:10:09.479 08:28:48 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:09.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.479 --rc genhtml_branch_coverage=1 00:10:09.480 --rc genhtml_function_coverage=1 00:10:09.480 --rc genhtml_legend=1 00:10:09.480 --rc geninfo_all_blocks=1 00:10:09.480 --rc geninfo_unexecuted_blocks=1 00:10:09.480 00:10:09.480 ' 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62303 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:09.480 08:28:48 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62303 00:10:09.480 08:28:48 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62303 ']' 00:10:09.480 08:28:48 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.480 08:28:48 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.480 08:28:48 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.480 08:28:48 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.480 08:28:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:09.480 [2024-11-19 08:28:48.738654] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:09.480 [2024-11-19 08:28:48.738837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62303 ] 00:10:09.742 [2024-11-19 08:28:48.927186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.001 [2024-11-19 08:28:49.088076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.934 08:28:49 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.934 08:28:49 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:10:10.934 08:28:49 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:10:10.934 08:28:49 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:10:10.934 08:28:49 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:10.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:11.192 Waiting for block devices as requested 00:10:11.192 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.450 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.450 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.450 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:16.774 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:10:16.774 BYT; 00:10:16.774 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:10:16.774 BYT; 00:10:16.774 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:10:16.774 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:10:16.774 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:16.774 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:10:16.774 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:10:16.774 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:16.774 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:16.774 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:16.774 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:16.774 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:10:16.774 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:10:16.774 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:16.774 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:16.774 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:10:16.774 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:10:16.774 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:16.775 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:16.775 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:16.775 08:28:55 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:16.775 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:16.775 08:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:10:17.711 The operation has completed successfully. 00:10:17.711 08:28:56 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:10:18.646 The operation has completed successfully. 00:10:18.646 08:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:19.213 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:19.780 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:19.780 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:19.780 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:20.042 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:20.042 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:10:20.043 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.043 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:20.043 [] 00:10:20.043 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.043 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:10:20.043 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:10:20.043 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:20.043 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:20.043 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:20.043 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.043 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:20.303 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.303 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:10:20.303 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.303 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:20.303 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.303 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:10:20.303 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:10:20.303 08:28:59 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.303 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:20.303 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.303 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:10:20.303 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.303 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:20.303 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.303 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:20.303 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.303 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:20.303 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.303 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:10:20.303 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:10:20.303 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:10:20.303 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.303 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:20.562 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.562 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:10:20.562 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:10:20.563 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "56d50697-3954-4de3-8cee-015bb2bb6b35"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "56d50697-3954-4de3-8cee-015bb2bb6b35",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "fe767890-2e50-408f-bed4-4e3c47efb7a5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fe767890-2e50-408f-bed4-4e3c47efb7a5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "47b75c6b-e590-42b8-bd6b-18c04019fb65"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "47b75c6b-e590-42b8-bd6b-18c04019fb65",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "08bdacbc-fbc4-4feb-a6e5-19777a8fffa9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "08bdacbc-fbc4-4feb-a6e5-19777a8fffa9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "78ec4847-2359-4dc2-81a6-a62dd5a7a9da"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "78ec4847-2359-4dc2-81a6-a62dd5a7a9da",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:20.563 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:10:20.563 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:10:20.563 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:10:20.563 08:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62303 00:10:20.563 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62303 ']' 00:10:20.563 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62303 00:10:20.563 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:10:20.563 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:20.563 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62303 00:10:20.563 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:20.563 killing process with pid 62303 00:10:20.563 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:20.563 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62303' 00:10:20.563 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62303 00:10:20.563 08:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62303 00:10:23.122 08:29:01 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:23.122 08:29:01 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:23.122 08:29:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:23.122 08:29:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.122 08:29:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:23.122 ************************************ 00:10:23.122 START TEST bdev_hello_world 00:10:23.122 ************************************ 00:10:23.122 08:29:01 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:23.122 
[2024-11-19 08:29:01.926866] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:23.122 [2024-11-19 08:29:01.927045] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62942 ] 00:10:23.122 [2024-11-19 08:29:02.110479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.122 [2024-11-19 08:29:02.235975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.696 [2024-11-19 08:29:02.852547] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:23.696 [2024-11-19 08:29:02.852620] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:23.696 [2024-11-19 08:29:02.852665] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:23.696 [2024-11-19 08:29:02.855814] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:23.696 [2024-11-19 08:29:02.856361] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:23.696 [2024-11-19 08:29:02.856404] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:23.696 [2024-11-19 08:29:02.856621] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:10:23.696 00:10:23.696 [2024-11-19 08:29:02.856664] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:24.633 00:10:24.633 real 0m2.024s 00:10:24.633 user 0m1.678s 00:10:24.633 sys 0m0.232s 00:10:24.633 08:29:03 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.633 08:29:03 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:24.633 ************************************ 00:10:24.633 END TEST bdev_hello_world 00:10:24.633 ************************************ 00:10:24.633 08:29:03 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:10:24.633 08:29:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:24.633 08:29:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.633 08:29:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:24.633 ************************************ 00:10:24.633 START TEST bdev_bounds 00:10:24.633 ************************************ 00:10:24.633 08:29:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:10:24.633 08:29:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62983 00:10:24.633 08:29:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:24.633 08:29:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:24.633 Process bdevio pid: 62983 00:10:24.633 08:29:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62983' 00:10:24.633 08:29:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62983 00:10:24.633 08:29:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62983 ']' 00:10:24.633 08:29:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.633 08:29:03 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.633 08:29:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.633 08:29:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.633 08:29:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:24.892 [2024-11-19 08:29:03.991127] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:24.892 [2024-11-19 08:29:03.991302] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62983 ] 00:10:24.892 [2024-11-19 08:29:04.175060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:25.150 [2024-11-19 08:29:04.306690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.150 [2024-11-19 08:29:04.306814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.150 [2024-11-19 08:29:04.306851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.086 08:29:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.086 08:29:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:10:26.086 08:29:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:26.086 I/O targets: 00:10:26.086 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:26.086 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:10:26.086 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:10:26.086 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:26.086 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:26.086 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:26.086 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:26.086 00:10:26.086 00:10:26.086 CUnit - A unit testing framework for C - Version 2.1-3 00:10:26.086 http://cunit.sourceforge.net/ 00:10:26.086 00:10:26.086 00:10:26.086 Suite: bdevio tests on: Nvme3n1 00:10:26.086 Test: blockdev write read block ...passed 00:10:26.086 Test: blockdev write zeroes read block ...passed 00:10:26.086 Test: blockdev write zeroes read no split ...passed 00:10:26.086 Test: blockdev write zeroes read split ...passed 00:10:26.086 Test: blockdev write zeroes read split partial ...passed 00:10:26.086 Test: blockdev reset ...[2024-11-19 08:29:05.272862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:26.086 [2024-11-19 08:29:05.276574] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:10:26.086 passed 00:10:26.086 Test: blockdev write read 8 blocks ...passed 00:10:26.086 Test: blockdev write read size > 128k ...passed 00:10:26.086 Test: blockdev write read invalid size ...passed 00:10:26.086 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:26.086 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:26.086 Test: blockdev write read max offset ...passed 00:10:26.086 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:26.086 Test: blockdev writev readv 8 blocks ...passed 00:10:26.086 Test: blockdev writev readv 30 x 1block ...passed 00:10:26.086 Test: blockdev writev readv block ...passed 00:10:26.086 Test: blockdev writev readv size > 128k ...passed 00:10:26.086 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:26.086 Test: blockdev comparev and writev ...[2024-11-19 08:29:05.283844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3a04000 len:0x1000 00:10:26.086 [2024-11-19 08:29:05.283906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:26.086 passed 00:10:26.086 Test: blockdev nvme passthru rw ...passed 00:10:26.086 Test: blockdev nvme passthru vendor specific ...[2024-11-19 08:29:05.284733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:26.086 [2024-11-19 08:29:05.284784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:26.086 passed 00:10:26.086 Test: blockdev nvme admin passthru ...passed 00:10:26.086 Test: blockdev copy ...passed 00:10:26.086 Suite: bdevio tests on: Nvme2n3 00:10:26.086 Test: blockdev write read block ...passed 00:10:26.086 Test: blockdev write zeroes read block ...passed 00:10:26.086 Test: blockdev write zeroes read no split ...passed 00:10:26.086 Test: blockdev write zeroes read split ...passed 00:10:26.086 Test: blockdev write zeroes read split partial ...passed 00:10:26.086 Test: blockdev reset ...[2024-11-19 08:29:05.353313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:26.086 [2024-11-19 08:29:05.357566] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:26.086 passed 00:10:26.086 Test: blockdev write read 8 blocks ...passed 00:10:26.086 Test: blockdev write read size > 128k ...passed 00:10:26.086 Test: blockdev write read invalid size ...passed 00:10:26.086 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:26.086 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:26.086 Test: blockdev write read max offset ...passed 00:10:26.086 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:26.086 Test: blockdev writev readv 8 blocks ...passed 00:10:26.086 Test: blockdev writev readv 30 x 1block ...passed 00:10:26.086 Test: blockdev writev readv block ...passed 00:10:26.086 Test: blockdev writev readv size > 128k ...passed 00:10:26.086 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:26.086 Test: blockdev comparev and writev ...[2024-11-19 08:29:05.365495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3a02000 len:0x1000 00:10:26.086 [2024-11-19 08:29:05.365559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:26.086 passed 00:10:26.087 Test: blockdev nvme passthru rw ...passed 00:10:26.087 Test: blockdev nvme passthru vendor specific ...[2024-11-19 08:29:05.366639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:26.087 [2024-11-19 08:29:05.366679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:26.087 passed 00:10:26.087 Test: blockdev nvme admin passthru ...passed 00:10:26.087 Test: blockdev copy ...passed 00:10:26.087 Suite: bdevio tests on: Nvme2n2 00:10:26.087 Test: blockdev write read block ...passed 00:10:26.087 Test: blockdev write zeroes read block ...passed 00:10:26.346 Test: blockdev write zeroes read no split ...passed 00:10:26.346 Test: blockdev write zeroes read split ...passed 00:10:26.346 Test: blockdev write zeroes read split partial ...passed 00:10:26.346 Test: blockdev reset ...[2024-11-19 08:29:05.433898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:26.346 [2024-11-19 08:29:05.438238] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:26.346 passed 00:10:26.346 Test: blockdev write read 8 blocks ...passed 00:10:26.346 Test: blockdev write read size > 128k ...passed 00:10:26.346 Test: blockdev write read invalid size ...passed 00:10:26.346 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:26.346 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:26.346 Test: blockdev write read max offset ...passed 00:10:26.346 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:26.346 Test: blockdev writev readv 8 blocks ...passed 00:10:26.346 Test: blockdev writev readv 30 x 1block ...passed 00:10:26.346 Test: blockdev writev readv block ...passed 00:10:26.346 Test: blockdev writev readv size > 128k ...passed 00:10:26.346 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:26.346 Test: blockdev comparev and writev ...[2024-11-19 08:29:05.445831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6038000 len:0x1000 00:10:26.346 [2024-11-19 08:29:05.445908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:26.346 passed 00:10:26.346 Test: blockdev nvme passthru rw ...passed 00:10:26.346 Test: blockdev nvme passthru vendor specific ...[2024-11-19 08:29:05.446717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:26.346 [2024-11-19 08:29:05.446758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:26.346 passed 00:10:26.346 Test: blockdev nvme admin passthru ...passed 00:10:26.346 Test: blockdev copy ...passed 00:10:26.346 Suite: bdevio tests on: Nvme2n1 00:10:26.346 Test: blockdev write read block ...passed 00:10:26.346 Test: blockdev write zeroes read block ...passed 00:10:26.346 Test: blockdev write zeroes read no split ...passed 00:10:26.346 Test: blockdev write zeroes read split ...passed 00:10:26.346 Test: blockdev write zeroes read split partial ...passed 00:10:26.346 Test: blockdev reset ...[2024-11-19 08:29:05.525141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:26.346 [2024-11-19 08:29:05.529205] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:26.346 passed 00:10:26.346 Test: blockdev write read 8 blocks ...passed 00:10:26.346 Test: blockdev write read size > 128k ...passed 00:10:26.346 Test: blockdev write read invalid size ...passed 00:10:26.346 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:26.346 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:26.346 Test: blockdev write read max offset ...passed 00:10:26.346 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:26.346 Test: blockdev writev readv 8 blocks ...passed 00:10:26.346 Test: blockdev writev readv 30 x 1block ...passed 00:10:26.346 Test: blockdev writev readv block ...passed 00:10:26.346 Test: blockdev writev readv size > 128k ...passed 00:10:26.346 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:26.346 Test: blockdev comparev and writev ...[2024-11-19 08:29:05.539371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6034000 len:0x1000 00:10:26.346 [2024-11-19 08:29:05.539587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:26.346 passed 00:10:26.346 Test: blockdev nvme passthru rw ...passed 00:10:26.346 Test: blockdev nvme passthru vendor specific ...[2024-11-19 08:29:05.540887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:10:26.346 Test: blockdev nvme admin passthru ...RP2 0x0 00:10:26.346 [2024-11-19 08:29:05.541041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:26.346 passed 00:10:26.346 Test: blockdev copy ...passed 00:10:26.346 Suite: bdevio tests on: Nvme1n1p2 00:10:26.346 Test: blockdev write read block ...passed 00:10:26.346 Test: blockdev write zeroes read block ...passed 00:10:26.346 Test: blockdev write zeroes read no split ...passed 00:10:26.346 Test: blockdev write zeroes read split ...passed 00:10:26.346 Test: blockdev write zeroes read split partial ...passed 00:10:26.347 Test: blockdev reset ...[2024-11-19 08:29:05.615181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:26.347 [2024-11-19 08:29:05.618976] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:26.347 passed 00:10:26.347 Test: blockdev write read 8 blocks ...passed 00:10:26.347 Test: blockdev write read size > 128k ...passed 00:10:26.347 Test: blockdev write read invalid size ...passed 00:10:26.347 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:26.347 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:26.347 Test: blockdev write read max offset ...passed 00:10:26.347 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:26.347 Test: blockdev writev readv 8 blocks ...passed 00:10:26.347 Test: blockdev writev readv 30 x 1block ...passed 00:10:26.347 Test: blockdev writev readv block ...passed 00:10:26.347 Test: blockdev writev readv size > 128k ...passed 00:10:26.347 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:26.347 Test: blockdev comparev and writev ...passed 00:10:26.347 Test: blockdev nvme passthru rw ...passed 00:10:26.347 Test: blockdev nvme passthru vendor specific ...passed 00:10:26.347 Test: blockdev nvme admin passthru ...passed 00:10:26.347 Test: blockdev copy ...passed 00:10:26.347 Suite: bdevio tests on: Nvme1n1p1 00:10:26.347 Test: blockdev write read block ...passed 00:10:26.347 Test: blockdev write zeroes read block ...passed 00:10:26.347 Test: blockdev write zeroes read no split ...[2024-11-19 08:29:05.628471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d6030000 len:0x1000 00:10:26.347 [2024-11-19 08:29:05.628533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:26.347 passed 00:10:26.606 Test: blockdev write zeroes read split ...passed 00:10:26.606 Test: blockdev write zeroes read split partial ...passed 00:10:26.606 Test: blockdev reset ...[2024-11-19 08:29:05.694928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:26.606 [2024-11-19 08:29:05.698721] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:26.606 passed 00:10:26.606 Test: blockdev write read 8 blocks ...passed 00:10:26.606 Test: blockdev write read size > 128k ...passed 00:10:26.606 Test: blockdev write read invalid size ...passed 00:10:26.606 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:26.606 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:26.606 Test: blockdev write read max offset ...passed 00:10:26.606 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:26.606 Test: blockdev writev readv 8 blocks ...passed 00:10:26.607 Test: blockdev writev readv 30 x 1block ...passed 00:10:26.607 Test: blockdev writev readv block ...passed 00:10:26.607 Test: blockdev writev readv size > 128k ...passed 00:10:26.607 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:26.607 Test: blockdev comparev and writev ...[2024-11-19 08:29:05.709400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c3c0e000 len:0x1000 00:10:26.607 [2024-11-19 08:29:05.709476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:26.607 passed 00:10:26.607 Test: blockdev nvme passthru rw ...passed 00:10:26.607 Test: blockdev nvme passthru vendor specific ...passed 00:10:26.607 Test: blockdev nvme admin passthru ...passed 00:10:26.607 Test: blockdev copy ...passed 00:10:26.607 Suite: bdevio tests on: Nvme0n1 00:10:26.607 Test: blockdev write read block ...passed 00:10:26.607 Test: blockdev write zeroes read block ...passed 00:10:26.607 Test: blockdev write zeroes read no split ...passed 00:10:26.607 Test: blockdev write zeroes read split ...passed 00:10:26.607 Test: blockdev write zeroes read split partial ...passed 00:10:26.607 Test: blockdev reset ...[2024-11-19 08:29:05.775745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:26.607 passed 00:10:26.607 Test: blockdev write read 8 blocks ...[2024-11-19 08:29:05.779250] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:26.607 passed 00:10:26.607 Test: blockdev write read size > 128k ...passed 00:10:26.607 Test: blockdev write read invalid size ...passed 00:10:26.607 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:26.607 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:26.607 Test: blockdev write read max offset ...passed 00:10:26.607 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:26.607 Test: blockdev writev readv 8 blocks ...passed 00:10:26.607 Test: blockdev writev readv 30 x 1block ...passed 00:10:26.607 Test: blockdev writev readv block ...passed 00:10:26.607 Test: blockdev writev readv size > 128k ...passed 00:10:26.607 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:26.607 Test: blockdev comparev and writev ...[2024-11-19 08:29:05.789404] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:26.607 separate metadata which is not supported yet. 
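bdevio skips compare-and-write on Nvme0n1 because that namespace carries separate (non-interleaved) metadata, which the fused compare-and-write path does not support yet. One way to spot such a bdev up front, assuming the md_size/md_interleave fields of bdev_get_bdevs output; separate metadata means md_size > 0 with md_interleave false:

    ./scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'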
00:10:26.607 passed 00:10:26.607 Test: blockdev nvme passthru rw ...passed 00:10:26.607 Test: blockdev nvme passthru vendor specific ...[2024-11-19 08:29:05.790393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:26.607 [2024-11-19 08:29:05.790442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:26.607 passed 00:10:26.607 Test: blockdev nvme admin passthru ...passed 00:10:26.607 Test: blockdev copy ...passed 00:10:26.607 00:10:26.607 Run Summary: Type Total Ran Passed Failed Inactive 00:10:26.607 suites 7 7 n/a 0 0 00:10:26.607 tests 161 161 161 0 0 00:10:26.607 asserts 1025 1025 1025 0 n/a 00:10:26.607 00:10:26.607 Elapsed time = 1.581 seconds 00:10:26.607 0 00:10:26.607 08:29:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62983 00:10:26.607 08:29:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62983 ']' 00:10:26.607 08:29:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62983 00:10:26.607 08:29:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:10:26.607 08:29:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.607 08:29:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62983 00:10:26.607 killing process with pid 62983 00:10:26.607 08:29:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.607 08:29:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.607 08:29:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62983' 00:10:26.607 08:29:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62983 00:10:26.607 08:29:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62983 00:10:27.543 ************************************ 00:10:27.543 END TEST bdev_bounds 00:10:27.543 ************************************ 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:27.543 00:10:27.543 real 0m2.861s 00:10:27.543 user 0m7.550s 00:10:27.543 sys 0m0.354s 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:27.543 08:29:06 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:27.543 08:29:06 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:27.543 08:29:06 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.543 08:29:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:27.543 ************************************ 00:10:27.543 START TEST bdev_nbd 00:10:27.543 ************************************ 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:27.543 08:29:06 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63044 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:27.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63044 /var/tmp/spdk-nbd.sock 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63044 ']' 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.543 08:29:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:27.802 [2024-11-19 08:29:06.904836] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
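The nbd test prologue above boots bdev_svc on a private RPC socket with the shared JSON bdev config, then waits for that socket to answer before any nbd_* RPC is issued. A minimal sketch of the startup dance, with paths assumed from the trace:

    rpc_sock=/var/tmp/spdk-nbd.sock
    ./test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 \
        --json ./test/bdev/bdev.json &
    nbd_pid=$!
    # poll until the RPC server is accepting requests
    until ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done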
00:10:27.803 [2024-11-19 08:29:06.904985] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.803 [2024-11-19 08:29:07.079198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.061 [2024-11-19 08:29:07.184868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.629 08:29:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.629 08:29:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:28.629 08:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:28.629 08:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.629 08:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:28.629 08:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:28.629 08:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:28.629 08:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.629 08:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:28.629 08:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:28.629 08:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:28.629 08:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:28.629 08:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:28.629 08:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:28.629 08:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:29.198 1+0 records in 00:10:29.198 1+0 records out 00:10:29.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512548 s, 8.0 MB/s 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:10:29.198 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:29.457 1+0 records in 00:10:29.457 1+0 records out 00:10:29.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000625854 s, 6.5 MB/s 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:29.457 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:29.715 1+0 records in 00:10:29.715 1+0 records out 00:10:29.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655502 s, 6.2 MB/s 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:29.715 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:29.716 08:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:30.036 1+0 records in 00:10:30.036 1+0 records out 00:10:30.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598898 s, 6.8 MB/s 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:30.036 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:30.302 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:30.302 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:30.302 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:30.302 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:30.302 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:30.302 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:30.302 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:30.302 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:30.302 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:30.302 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:30.302 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:30.303 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:30.303 1+0 records in 00:10:30.303 1+0 records out 00:10:30.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651691 s, 6.3 MB/s 00:10:30.303 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:30.303 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:30.303 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:30.303 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:30.303 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:30.303 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:30.303 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:30.303 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:30.869 1+0 records in 00:10:30.869 1+0 records out 00:10:30.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574228 s, 7.1 MB/s 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:30.869 08:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:31.128 1+0 records in 00:10:31.128 1+0 records out 00:10:31.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572554 s, 7.2 MB/s 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:31.128 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:31.387 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:31.387 { 00:10:31.387 "nbd_device": "/dev/nbd0", 00:10:31.387 "bdev_name": "Nvme0n1" 00:10:31.387 }, 00:10:31.387 { 00:10:31.387 "nbd_device": "/dev/nbd1", 00:10:31.387 "bdev_name": "Nvme1n1p1" 00:10:31.387 }, 00:10:31.387 { 00:10:31.387 "nbd_device": "/dev/nbd2", 00:10:31.387 "bdev_name": "Nvme1n1p2" 00:10:31.387 }, 00:10:31.387 { 00:10:31.387 "nbd_device": "/dev/nbd3", 00:10:31.387 "bdev_name": "Nvme2n1" 00:10:31.387 }, 00:10:31.387 { 00:10:31.387 "nbd_device": "/dev/nbd4", 00:10:31.387 "bdev_name": "Nvme2n2" 00:10:31.387 }, 00:10:31.387 { 00:10:31.387 "nbd_device": "/dev/nbd5", 00:10:31.387 "bdev_name": "Nvme2n3" 00:10:31.387 }, 00:10:31.387 { 00:10:31.387 "nbd_device": "/dev/nbd6", 00:10:31.387 "bdev_name": "Nvme3n1" 00:10:31.387 } 00:10:31.387 ]' 00:10:31.387 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:31.387 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:31.387 { 00:10:31.387 "nbd_device": "/dev/nbd0", 00:10:31.387 "bdev_name": "Nvme0n1" 00:10:31.387 }, 00:10:31.387 { 00:10:31.387 "nbd_device": "/dev/nbd1", 00:10:31.387 "bdev_name": "Nvme1n1p1" 00:10:31.387 }, 00:10:31.387 { 00:10:31.387 "nbd_device": "/dev/nbd2", 00:10:31.387 "bdev_name": "Nvme1n1p2" 00:10:31.387 }, 00:10:31.387 { 00:10:31.387 "nbd_device": "/dev/nbd3", 00:10:31.387 "bdev_name": "Nvme2n1" 00:10:31.387 }, 00:10:31.387 { 00:10:31.387 "nbd_device": "/dev/nbd4", 00:10:31.387 "bdev_name": "Nvme2n2" 00:10:31.387 }, 00:10:31.387 { 00:10:31.387 "nbd_device": "/dev/nbd5", 00:10:31.387 "bdev_name": "Nvme2n3" 00:10:31.387 }, 00:10:31.387 { 00:10:31.387 "nbd_device": "/dev/nbd6", 00:10:31.387 "bdev_name": "Nvme3n1" 00:10:31.387 } 00:10:31.387 ]' 00:10:31.387 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:31.387 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:31.387 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:31.387 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:31.387 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:31.387 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:31.387 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:31.387 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:31.646 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:31.646 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:31.646 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:31.646 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:31.646 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:31.646 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:31.646 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:31.646 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:31.646 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:31.646 08:29:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:32.213 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:32.213 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:32.213 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:32.213 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:32.213 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:32.213 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:32.213 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:32.213 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:32.213 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:32.213 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:32.472 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:32.472 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:32.472 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:32.472 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:32.472 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:32.472 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:32.472 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:32.472 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:32.472 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:32.472 08:29:11 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:32.730 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:32.730 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:32.730 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:32.730 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:32.730 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:32.731 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:32.731 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:32.731 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:32.731 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:32.731 08:29:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:32.989 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:32.989 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:32.989 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:32.989 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:32.989 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:32.989 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:32.989 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:32.989 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:32.989 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:32.989 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:33.247 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:33.247 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:33.247 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:33.247 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:33.247 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:33.247 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:33.247 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:33.247 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:33.247 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:33.247 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:33.505 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:33.505 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:33.505 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:10:33.505 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:33.505 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:33.506 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:33.506 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:33.506 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:33.506 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:33.506 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.506 08:29:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:33.764 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:33.764 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:33.764 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:34.022 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:34.023 
08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:34.023 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:34.023 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:34.023 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:34.023 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:34.023 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:34.281 /dev/nbd0 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:34.281 1+0 records in 00:10:34.281 1+0 records out 00:10:34.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619351 s, 6.6 MB/s 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:34.281 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:10:34.539 /dev/nbd1 00:10:34.539 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:34.539 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:34.539 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:34.539 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:34.539 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:34.539 08:29:13 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:34.539 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:34.539 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:34.539 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:34.539 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:34.539 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:34.539 1+0 records in 00:10:34.539 1+0 records out 00:10:34.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620824 s, 6.6 MB/s 00:10:34.539 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.539 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:34.539 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.540 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:34.540 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:34.540 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:34.540 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:34.540 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:10:34.798 /dev/nbd10 00:10:34.798 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:34.798 08:29:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:34.798 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:10:34.798 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:34.798 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:34.798 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:34.798 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:10:34.798 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:34.798 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:34.798 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:34.798 08:29:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:34.798 1+0 records in 00:10:34.798 1+0 records out 00:10:34.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580268 s, 7.1 MB/s 00:10:34.798 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.798 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:34.798 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.798 08:29:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:34.798 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:34.798 08:29:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:34.798 08:29:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:34.798 08:29:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:10:35.056 /dev/nbd11 00:10:35.056 08:29:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:35.056 08:29:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:35.056 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:10:35.056 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:35.056 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:35.056 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:35.057 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:10:35.315 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:35.315 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:35.315 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:35.315 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:35.315 1+0 records in 00:10:35.315 1+0 records out 00:10:35.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00068719 s, 6.0 MB/s 00:10:35.315 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.315 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:35.315 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.315 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:35.315 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:35.315 08:29:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:35.315 08:29:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:35.315 08:29:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:10:35.573 /dev/nbd12 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
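Each nbd_start_disk is followed by the waitfornbd pattern traced above: the device node alone proves nothing, so the helper polls /proc/partitions for the name and then forces one O_DIRECT read through the device. A simplified sketch of that loop:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # a single 4 KiB direct read proves the NBD connection serves I/O
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }
    waitfornbd nbd12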
00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:35.573 1+0 records in 00:10:35.573 1+0 records out 00:10:35.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072082 s, 5.7 MB/s 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:35.573 08:29:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:10:35.831 /dev/nbd13 00:10:35.831 08:29:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:35.831 08:29:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:35.832 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:10:35.832 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:35.832 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:35.832 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:35.832 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:10:35.832 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:35.832 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:35.832 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:35.832 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:35.832 1+0 records in 00:10:35.832 1+0 records out 00:10:35.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635383 s, 6.4 MB/s 00:10:35.832 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.832 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:35.832 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.832 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:35.832 08:29:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:35.832 08:29:14 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:35.832 08:29:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:35.832 08:29:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:10:36.090 /dev/nbd14 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:36.091 1+0 records in 00:10:36.091 1+0 records out 00:10:36.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000971544 s, 4.2 MB/s 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.091 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:36.369 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:36.369 { 00:10:36.369 "nbd_device": "/dev/nbd0", 00:10:36.369 "bdev_name": "Nvme0n1" 00:10:36.369 }, 00:10:36.369 { 00:10:36.369 "nbd_device": "/dev/nbd1", 00:10:36.369 "bdev_name": "Nvme1n1p1" 00:10:36.369 }, 00:10:36.369 { 00:10:36.369 "nbd_device": "/dev/nbd10", 00:10:36.369 "bdev_name": "Nvme1n1p2" 00:10:36.369 }, 00:10:36.369 { 00:10:36.369 "nbd_device": "/dev/nbd11", 00:10:36.369 "bdev_name": "Nvme2n1" 00:10:36.369 }, 00:10:36.369 { 00:10:36.369 "nbd_device": "/dev/nbd12", 00:10:36.369 "bdev_name": "Nvme2n2" 00:10:36.369 }, 00:10:36.369 { 00:10:36.369 "nbd_device": "/dev/nbd13", 00:10:36.369 "bdev_name": "Nvme2n3" 
00:10:36.369 }, 00:10:36.369 { 00:10:36.369 "nbd_device": "/dev/nbd14", 00:10:36.369 "bdev_name": "Nvme3n1" 00:10:36.369 } 00:10:36.369 ]' 00:10:36.369 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:36.369 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:36.369 { 00:10:36.369 "nbd_device": "/dev/nbd0", 00:10:36.369 "bdev_name": "Nvme0n1" 00:10:36.369 }, 00:10:36.369 { 00:10:36.369 "nbd_device": "/dev/nbd1", 00:10:36.369 "bdev_name": "Nvme1n1p1" 00:10:36.369 }, 00:10:36.369 { 00:10:36.369 "nbd_device": "/dev/nbd10", 00:10:36.369 "bdev_name": "Nvme1n1p2" 00:10:36.369 }, 00:10:36.369 { 00:10:36.369 "nbd_device": "/dev/nbd11", 00:10:36.369 "bdev_name": "Nvme2n1" 00:10:36.369 }, 00:10:36.369 { 00:10:36.369 "nbd_device": "/dev/nbd12", 00:10:36.369 "bdev_name": "Nvme2n2" 00:10:36.369 }, 00:10:36.369 { 00:10:36.369 "nbd_device": "/dev/nbd13", 00:10:36.369 "bdev_name": "Nvme2n3" 00:10:36.369 }, 00:10:36.369 { 00:10:36.369 "nbd_device": "/dev/nbd14", 00:10:36.369 "bdev_name": "Nvme3n1" 00:10:36.369 } 00:10:36.369 ]' 00:10:36.369 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:36.369 /dev/nbd1 00:10:36.369 /dev/nbd10 00:10:36.369 /dev/nbd11 00:10:36.369 /dev/nbd12 00:10:36.369 /dev/nbd13 00:10:36.369 /dev/nbd14' 00:10:36.369 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:36.369 /dev/nbd1 00:10:36.369 /dev/nbd10 00:10:36.370 /dev/nbd11 00:10:36.370 /dev/nbd12 00:10:36.370 /dev/nbd13 00:10:36.370 /dev/nbd14' 00:10:36.370 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:36.629 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:10:36.629 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:10:36.629 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:10:36.629 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:10:36.629 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:10:36.629 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:36.629 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:36.629 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:36.629 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:36.629 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:36.629 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:36.629 256+0 records in 00:10:36.629 256+0 records out 00:10:36.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00995822 s, 105 MB/s 00:10:36.629 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:36.629 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:36.629 256+0 records in 00:10:36.629 256+0 records out 00:10:36.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.180112 s, 5.8 MB/s 00:10:36.629 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:36.629 08:29:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:36.965 256+0 records in 00:10:36.965 256+0 records out 00:10:36.965 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173511 s, 6.0 MB/s 00:10:36.965 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:36.965 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:36.965 256+0 records in 00:10:36.965 256+0 records out 00:10:36.965 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170078 s, 6.2 MB/s 00:10:36.965 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:36.965 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:37.240 256+0 records in 00:10:37.240 256+0 records out 00:10:37.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150316 s, 7.0 MB/s 00:10:37.240 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:37.240 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:37.240 256+0 records in 00:10:37.240 256+0 records out 00:10:37.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153253 s, 6.8 MB/s 00:10:37.240 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:37.240 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:37.498 256+0 records in 00:10:37.498 256+0 records out 00:10:37.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146779 s, 7.1 MB/s 00:10:37.498 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:37.498 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:10:37.758 256+0 records in 00:10:37.758 256+0 records out 00:10:37.758 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145616 s, 7.2 MB/s 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:37.758 08:29:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:38.017 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:38.017 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:38.017 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:38.017 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:38.017 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:38.017 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:38.017 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:38.017 08:29:17 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:38.017 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:38.017 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:38.275 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:38.275 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:38.275 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:38.275 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:38.275 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:38.275 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:38.275 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:38.275 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:38.275 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:38.275 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:38.534 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:38.534 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:38.534 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:38.534 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:38.534 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:38.534 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:38.534 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:38.534 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:38.534 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:38.534 08:29:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:39.103 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:39.103 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:39.103 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:39.103 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.103 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.103 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:39.103 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:39.103 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.103 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.103 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:39.362 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:10:39.362 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:39.362 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:39.362 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.362 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.362 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:39.362 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:39.362 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.362 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.362 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:39.620 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:39.620 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:39.620 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:39.620 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.620 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.621 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:39.621 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:39.621 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.621 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.621 08:29:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:10:39.879 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:10:39.879 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:10:39.879 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:10:39.879 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.879 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.879 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:10:39.879 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:39.879 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.879 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:39.879 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.879 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:40.137 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:40.137 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:40.137 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:40.137 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:10:40.137 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:40.137 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:40.137 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:40.137 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:40.137 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:40.137 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:40.137 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:40.137 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:40.137 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:40.137 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:40.137 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:40.137 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:40.395 malloc_lvol_verify 00:10:40.395 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:40.962 ecb7660e-cf03-468c-aaf9-0f306cd292d9 00:10:40.962 08:29:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:41.221 7ac1900c-96da-4773-af43-e44a1ea48f57 00:10:41.221 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:41.481 /dev/nbd0 00:10:41.481 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:41.481 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:41.481 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:41.481 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:41.481 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:41.481 mke2fs 1.47.0 (5-Feb-2023) 00:10:41.481 Discarding device blocks: 0/4096 done 00:10:41.481 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:41.481 00:10:41.481 Allocating group tables: 0/1 done 00:10:41.481 Writing inode tables: 0/1 done 00:10:41.481 Creating journal (1024 blocks): done 00:10:41.481 Writing superblocks and filesystem accounting information: 0/1 done 00:10:41.481 00:10:41.481 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:41.481 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.481 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:41.481 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:41.481 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:41.481 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:10:41.481 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63044 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63044 ']' 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63044 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63044 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.740 killing process with pid 63044 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63044' 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63044 00:10:41.740 08:29:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63044 00:10:43.118 08:29:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:43.118 00:10:43.118 real 0m15.194s 00:10:43.118 user 0m22.133s 00:10:43.118 sys 0m4.726s 00:10:43.118 08:29:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.118 08:29:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:43.118 ************************************ 00:10:43.118 END TEST bdev_nbd 00:10:43.118 ************************************ 00:10:43.118 08:29:22 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:10:43.118 08:29:22 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:10:43.118 08:29:22 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:10:43.118 skipping fio tests on NVMe due to multi-ns failures. 00:10:43.118 08:29:22 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:10:43.118 08:29:22 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:43.118 08:29:22 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:43.118 08:29:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:43.118 08:29:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.118 08:29:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:43.118 ************************************ 00:10:43.118 START TEST bdev_verify 00:10:43.118 ************************************ 00:10:43.118 08:29:22 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:43.118 [2024-11-19 08:29:22.157588] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:10:43.118 [2024-11-19 08:29:22.158402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63504 ] 00:10:43.118 [2024-11-19 08:29:22.337441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:43.377 [2024-11-19 08:29:22.441323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.377 [2024-11-19 08:29:22.441344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.944 Running I/O for 5 seconds... 
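While the verify job above runs: the bdev_verify stage is just bdevperf driven by the generated bdev.json, so it can be reproduced by hand. The paths and flags below are exactly those in the invocation above; the config file must already describe the NVMe/GPT bdevs:

    # 128-deep queue of 4 KiB verify I/O for 5 s on cores 0-1 (-m 0x3);
    # -C as passed by the harness above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3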
00:10:46.287 20096.00 IOPS, 78.50 MiB/s [2024-11-19T08:29:26.519Z]
18912.00 IOPS, 73.88 MiB/s [2024-11-19T08:29:27.454Z]
19050.67 IOPS, 74.42 MiB/s [2024-11-19T08:29:28.391Z]
19168.00 IOPS, 74.88 MiB/s [2024-11-19T08:29:28.391Z]
18956.80 IOPS, 74.05 MiB/s
00:10:49.095 Latency(us)
00:10:49.095 [2024-11-19T08:29:28.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:49.095 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:49.095 Verification LBA range: start 0x0 length 0xbd0bd
00:10:49.095 Nvme0n1 : 5.10 1356.48 5.30 0.00 0.00 94124.43 21090.68 88652.33
00:10:49.095 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:49.095 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:10:49.095 Nvme0n1 : 5.07 1313.71 5.13 0.00 0.00 97151.95 22282.24 91988.71
00:10:49.095 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:49.095 Verification LBA range: start 0x0 length 0x4ff80
00:10:49.095 Nvme1n1p1 : 5.10 1355.96 5.30 0.00 0.00 93963.30 20494.89 87699.08
00:10:49.095 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:49.095 Verification LBA range: start 0x4ff80 length 0x4ff80
00:10:49.095 Nvme1n1p1 : 5.07 1313.13 5.13 0.00 0.00 97024.55 24307.90 91035.46
00:10:49.095 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:49.095 Verification LBA range: start 0x0 length 0x4ff7f
00:10:49.095 Nvme1n1p2 : 5.10 1355.00 5.29 0.00 0.00 93831.84 21924.77 85315.96
00:10:49.095 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:49.095 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:10:49.095 Nvme1n1p2 : 5.07 1312.61 5.13 0.00 0.00 96860.14 25022.84 91512.09
00:10:49.095 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:49.095 Verification LBA range: start 0x0 length 0x80000
00:10:49.095 Nvme2n1 : 5.10 1354.59 5.29 0.00 0.00 93694.76 22163.08 82456.20
00:10:49.095 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:49.095 Verification LBA range: start 0x80000 length 0x80000
00:10:49.095 Nvme2n1 : 5.07 1312.09 5.13 0.00 0.00 96682.65 24427.05 91035.46
00:10:49.095 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:49.095 Verification LBA range: start 0x0 length 0x80000
00:10:49.095 Nvme2n2 : 5.10 1354.12 5.29 0.00 0.00 93582.47 21686.46 82932.83
00:10:49.095 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:49.095 Verification LBA range: start 0x80000 length 0x80000
00:10:49.095 Nvme2n2 : 5.07 1311.56 5.12 0.00 0.00 96499.87 23473.80 91035.46
00:10:49.095 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:49.095 Verification LBA range: start 0x0 length 0x80000
00:10:49.095 Nvme2n3 : 5.11 1353.61 5.29 0.00 0.00 93444.68 20494.89 83886.08
00:10:49.095 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:49.095 Verification LBA range: start 0x80000 length 0x80000
00:10:49.095 Nvme2n3 : 5.09 1320.93 5.16 0.00 0.00 95696.78 4379.00 90082.21
00:10:49.095 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:49.095 Verification LBA range: start 0x0 length 0x20000
00:10:49.095 Nvme3n1 : 5.11 1353.10 5.29 0.00 0.00 93310.16 13226.36 86745.83
00:10:49.095 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:49.095 Verification LBA range: start 0x20000 length 0x20000
00:10:49.095 Nvme3n1 : 5.09 1320.47 5.16 0.00 0.00 95554.64 3932.16 91988.71
00:10:49.095 [2024-11-19T08:29:28.391Z] ===================================================================================================================
00:10:49.095 [2024-11-19T08:29:28.391Z] Total : 18687.38 73.00 0.00 0.00 95076.02 3932.16 91988.71
00:10:50.469
00:10:50.469 real 0m7.596s
00:10:50.469 user 0m14.058s
00:10:50.469 sys 0m0.263s
00:10:50.469 08:29:29 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:50.469 08:29:29 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:10:50.469 ************************************
00:10:50.469 END TEST bdev_verify
00:10:50.469 ************************************
00:10:50.470 08:29:29 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:50.470 08:29:29 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:10:50.470 08:29:29 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:50.470 08:29:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:50.470 ************************************
00:10:50.470 START TEST bdev_verify_big_io
00:10:50.470 ************************************
00:10:50.470 08:29:29 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:50.729 [2024-11-19 08:29:29.806635] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
[2024-11-19 08:29:29.806797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63608 ]
00:10:50.729 [2024-11-19 08:29:29.991090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:50.988 [2024-11-19 08:29:30.094838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:50.988 [2024-11-19 08:29:30.094841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:51.924 Running I/O for 5 seconds...
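While the big-I/O pass runs, the verify table above is easy to sanity-check: the MiB/s column is just IOPS times the 4096-byte I/O size. For the first Nvme0n1 job, 1356.48 IOPS x 4096 B / 2^20 ≈ 5.30 MiB/s, and the Total row's 18687.38 IOPS works out the same way to 73.00 MiB/s. In shell terms:

    # IOPS -> MiB/s for 4 KiB I/O: iops * 4096 / 1048576, i.e. iops / 256
    awk 'BEGIN { printf "%.2f\n", 1356.48 * 4096 / 1048576 }'   # 5.30
    awk 'BEGIN { printf "%.2f\n", 18687.38 / 256 }'             # 73.00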
00:10:55.845 576.00 IOPS, 36.00 MiB/s [2024-11-19T08:29:37.043Z]
1424.00 IOPS, 89.00 MiB/s [2024-11-19T08:29:37.302Z]
2316.33 IOPS, 144.77 MiB/s
00:10:58.006 Latency(us)
00:10:58.006 [2024-11-19T08:29:37.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:58.006 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:58.006 Verification LBA range: start 0x0 length 0xbd0b
00:10:58.006 Nvme0n1 : 5.83 109.86 6.87 0.00 0.00 1126050.26 25380.31 1204909.15
00:10:58.006 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:58.006 Verification LBA range: start 0xbd0b length 0xbd0b
00:10:58.006 Nvme0n1 : 5.97 94.47 5.90 0.00 0.00 1260855.91 17635.14 1761607.68
00:10:58.006 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:58.006 Verification LBA range: start 0x0 length 0x4ff8
00:10:58.006 Nvme1n1p1 : 5.93 90.71 5.67 0.00 0.00 1307809.75 91035.46 1837867.75
00:10:58.006 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:58.006 Verification LBA range: start 0x4ff8 length 0x4ff8
00:10:58.006 Nvme1n1p1 : 6.02 103.46 6.47 0.00 0.00 1133195.76 33363.78 1166779.11
00:10:58.006 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:58.006 Verification LBA range: start 0x0 length 0x4ff7
00:10:58.006 Nvme1n1p2 : 6.09 70.97 4.44 0.00 0.00 1621929.94 152520.15 2120030.02
00:10:58.006 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:58.006 Verification LBA range: start 0x4ff7 length 0x4ff7
00:10:58.006 Nvme1n1p2 : 6.02 102.42 6.40 0.00 0.00 1126844.93 48615.80 1853119.77
00:10:58.006 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:58.006 Verification LBA range: start 0x0 length 0x8000
00:10:58.006 Nvme2n1 : 5.93 112.40 7.02 0.00 0.00 1007114.59 102474.47 1121023.07
00:10:58.006 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:58.006 Verification LBA range: start 0x8000 length 0x8000
00:10:58.006 Nvme2n1 : 6.08 113.08 7.07 0.00 0.00 993040.30 56480.12 1258291.20
00:10:58.006 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:58.006 Verification LBA range: start 0x0 length 0x8000
00:10:58.006 Nvme2n2 : 5.98 117.63 7.35 0.00 0.00 938341.98 46232.67 1143901.09
00:10:58.006 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:58.006 Verification LBA range: start 0x8000 length 0x8000
00:10:58.006 Nvme2n2 : 6.11 107.73 6.73 0.00 0.00 1001659.36 57195.05 1937005.85
00:10:58.006 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:58.006 Verification LBA range: start 0x0 length 0x8000
00:10:58.006 Nvme2n3 : 6.08 126.29 7.89 0.00 0.00 851351.43 51237.24 1174405.12
00:10:58.006 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:58.006 Verification LBA range: start 0x8000 length 0x8000
00:10:58.006 Nvme2n3 : 6.13 116.07 7.25 0.00 0.00 903790.67 15073.28 1967509.88
00:10:58.006 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:58.006 Verification LBA range: start 0x0 length 0x2000
00:10:58.006 Nvme3n1 : 6.10 136.48 8.53 0.00 0.00 766142.02 3619.37 1204909.15
00:10:58.006 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:58.006 Verification LBA range: start 0x2000 length 0x2000
00:10:58.006 Nvme3n1 : 6.24 151.19 9.45 0.00 0.00 681086.31 808.03 1998013.91
00:10:58.006 [2024-11-19T08:29:37.302Z] ===================================================================================================================
00:10:58.006 [2024-11-19T08:29:37.302Z] Total : 1552.76 97.05 0.00 0.00 1010948.76 808.03 2120030.02
00:10:59.933
00:10:59.933 real 0m9.124s
00:10:59.933 user 0m17.065s
00:10:59.933 sys 0m0.286s
00:10:59.933 08:29:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:59.933 08:29:38 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:10:59.933 ************************************
00:10:59.933 END TEST bdev_verify_big_io
00:10:59.933 ************************************
00:10:59.933 08:29:38 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:59.933 08:29:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:59.933 08:29:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:59.933 08:29:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:59.933 ************************************
00:10:59.933 START TEST bdev_write_zeroes
00:10:59.933 ************************************
00:10:59.933 08:29:38 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:59.934 [2024-11-19 08:29:38.959847] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
[2024-11-19 08:29:38.960017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63723 ]
00:11:00.192 [2024-11-19 08:29:39.134938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:00.758 [2024-11-19 08:29:39.242400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:00.758 Running I/O for 1 seconds...
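Every START TEST/END TEST pair above comes from autotest's run_test wrapper. A stripped-down sketch of that pattern, assuming only what the banners show (the real helper in autotest_common.sh also manages xtrace state and per-test timing records):

    # Banner + timing wrapper in the style of run_test above (simplified).
    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }

Invoked as, e.g., run_test_sketch bdev_write_zeroes ./bdevperf --json bdev.json -q 128 -o 4096 -w write_zeroes -t 1.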
00:11:01.693 39424.00 IOPS, 154.00 MiB/s 00:11:01.693 Latency(us) 00:11:01.693 [2024-11-19T08:29:40.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.693 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:01.693 Nvme0n1 : 1.03 5675.83 22.17 0.00 0.00 22489.34 14000.87 52428.80 00:11:01.693 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:01.693 Nvme1n1p1 : 1.03 5668.31 22.14 0.00 0.00 22485.53 14239.19 52190.49 00:11:01.693 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:01.693 Nvme1n1p2 : 1.03 5660.80 22.11 0.00 0.00 22449.76 13941.29 52667.11 00:11:01.693 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:01.693 Nvme2n1 : 1.03 5653.91 22.09 0.00 0.00 22332.97 12690.15 51713.86 00:11:01.693 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:01.693 Nvme2n2 : 1.03 5647.07 22.06 0.00 0.00 22315.15 12392.26 51713.86 00:11:01.693 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:01.693 Nvme2n3 : 1.03 5640.27 22.03 0.00 0.00 22292.82 12153.95 51713.86 00:11:01.693 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:01.693 Nvme3n1 : 1.03 5633.43 22.01 0.00 0.00 22242.92 9711.24 52190.49 00:11:01.693 [2024-11-19T08:29:40.989Z] =================================================================================================================== 00:11:01.693 [2024-11-19T08:29:40.989Z] Total : 39579.63 154.61 0.00 0.00 22372.64 9711.24 52667.11 00:11:03.068 00:11:03.068 real 0m3.080s 00:11:03.068 user 0m2.745s 00:11:03.068 sys 0m0.209s 00:11:03.068 08:29:41 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.068 08:29:41 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:03.068 ************************************ 00:11:03.068 END TEST bdev_write_zeroes 00:11:03.068 ************************************ 00:11:03.068 08:29:41 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:03.068 08:29:41 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:03.068 08:29:41 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.068 08:29:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:03.068 ************************************ 00:11:03.068 START TEST bdev_json_nonenclosed 00:11:03.068 ************************************ 00:11:03.068 08:29:42 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:03.068 [2024-11-19 08:29:42.114846] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
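The bdev_json_nonenclosed run starting here feeds bdevperf a config whose "subsystems" data is not wrapped in an enclosing JSON object, and expects the "not enclosed in {}" error seen below (the sibling bdev_json_nonarray test breaks the other rule, that "subsystems" must be an array). For contrast, a minimal well-formed config has this shape; the malloc bdev contents are an illustrative assumption, not the contents of nonenclosed.json:

    # Write a valid bdevperf JSON config: one enclosing object, "subsystems" array.
    cat > /tmp/enclosed.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 32768, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF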
00:11:03.068 [2024-11-19 08:29:42.114998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63776 ] 00:11:03.068 [2024-11-19 08:29:42.295050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.326 [2024-11-19 08:29:42.481135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.326 [2024-11-19 08:29:42.481261] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:03.326 [2024-11-19 08:29:42.481298] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:03.326 [2024-11-19 08:29:42.481315] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:03.584 00:11:03.584 real 0m0.739s 00:11:03.584 user 0m0.513s 00:11:03.584 sys 0m0.119s 00:11:03.584 08:29:42 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.584 08:29:42 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:03.584 ************************************ 00:11:03.584 END TEST bdev_json_nonenclosed 00:11:03.584 ************************************ 00:11:03.584 08:29:42 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:03.584 08:29:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:03.584 08:29:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.585 08:29:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:03.585 ************************************ 00:11:03.585 START TEST bdev_json_nonarray 00:11:03.585 ************************************ 00:11:03.585 08:29:42 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:03.842 [2024-11-19 08:29:42.898519] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:11:03.842 [2024-11-19 08:29:42.898715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63801 ] 00:11:03.842 [2024-11-19 08:29:43.088818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.156 [2024-11-19 08:29:43.218874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.156 [2024-11-19 08:29:43.219035] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:11:04.156 [2024-11-19 08:29:43.219072] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:04.156 [2024-11-19 08:29:43.219090] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:04.415 00:11:04.415 real 0m0.723s 00:11:04.415 user 0m0.473s 00:11:04.415 sys 0m0.143s 00:11:04.415 08:29:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.415 08:29:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:04.415 ************************************ 00:11:04.415 END TEST bdev_json_nonarray 00:11:04.415 ************************************ 00:11:04.415 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:11:04.415 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:11:04.415 08:29:43 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:04.415 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:04.415 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.415 08:29:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:04.415 ************************************ 00:11:04.415 START TEST bdev_gpt_uuid 00:11:04.415 ************************************ 00:11:04.415 08:29:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:11:04.415 08:29:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:11:04.415 08:29:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:11:04.415 08:29:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63831 00:11:04.415 08:29:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:04.415 08:29:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63831 00:11:04.415 08:29:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:04.415 08:29:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63831 ']' 00:11:04.415 08:29:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.415 08:29:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.415 08:29:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.415 08:29:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.415 08:29:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:04.673 [2024-11-19 08:29:43.716655] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
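The bdev_gpt_uuid test booting here looks up each GPT partition bdev by its unique partition GUID and requires the GUID to round-trip exactly. Reduced to a shell sketch, using the rpc.py path and jq filter that appear later in this log (spdk_tgt answers on its default /var/tmp/spdk.sock socket, so no -s flag is needed):

    # Ask spdk_tgt for the bdev registered under the partition's unique GUID and
    # confirm it matches (mirrors the blockdev.sh@620-623 checks below).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    expected=6f89f330-603b-4116-ac73-2ca8eae53030   # SPDK_TEST_first, from the output below
    got=$("$rpc" bdev_get_bdevs -b "$expected" |
          jq -r '.[0].driver_specific.gpt.unique_partition_guid')
    [[ "$got" == "$expected" ]] && echo "unique_partition_guid matches"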
00:11:04.673 [2024-11-19 08:29:43.716891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63831 ] 00:11:04.673 [2024-11-19 08:29:43.902990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.931 [2024-11-19 08:29:44.054089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.867 08:29:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.867 08:29:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:11:05.867 08:29:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:05.867 08:29:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.867 08:29:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:06.125 Some configs were skipped because the RPC state that can call them passed over. 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:11:06.125 { 00:11:06.125 "name": "Nvme1n1p1", 00:11:06.125 "aliases": [ 00:11:06.125 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:11:06.125 ], 00:11:06.125 "product_name": "GPT Disk", 00:11:06.125 "block_size": 4096, 00:11:06.125 "num_blocks": 655104, 00:11:06.125 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:06.125 "assigned_rate_limits": { 00:11:06.125 "rw_ios_per_sec": 0, 00:11:06.125 "rw_mbytes_per_sec": 0, 00:11:06.125 "r_mbytes_per_sec": 0, 00:11:06.125 "w_mbytes_per_sec": 0 00:11:06.125 }, 00:11:06.125 "claimed": false, 00:11:06.125 "zoned": false, 00:11:06.125 "supported_io_types": { 00:11:06.125 "read": true, 00:11:06.125 "write": true, 00:11:06.125 "unmap": true, 00:11:06.125 "flush": true, 00:11:06.125 "reset": true, 00:11:06.125 "nvme_admin": false, 00:11:06.125 "nvme_io": false, 00:11:06.125 "nvme_io_md": false, 00:11:06.125 "write_zeroes": true, 00:11:06.125 "zcopy": false, 00:11:06.125 "get_zone_info": false, 00:11:06.125 "zone_management": false, 00:11:06.125 "zone_append": false, 00:11:06.125 "compare": true, 00:11:06.125 "compare_and_write": false, 00:11:06.125 "abort": true, 00:11:06.125 "seek_hole": false, 00:11:06.125 "seek_data": false, 00:11:06.125 "copy": true, 00:11:06.125 "nvme_iov_md": false 00:11:06.125 }, 00:11:06.125 "driver_specific": { 
00:11:06.125 "gpt": { 00:11:06.125 "base_bdev": "Nvme1n1", 00:11:06.125 "offset_blocks": 256, 00:11:06.125 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:11:06.125 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:06.125 "partition_name": "SPDK_TEST_first" 00:11:06.125 } 00:11:06.125 } 00:11:06.125 } 00:11:06.125 ]' 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.125 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:11:06.384 { 00:11:06.384 "name": "Nvme1n1p2", 00:11:06.384 "aliases": [ 00:11:06.384 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:11:06.384 ], 00:11:06.384 "product_name": "GPT Disk", 00:11:06.384 "block_size": 4096, 00:11:06.384 "num_blocks": 655103, 00:11:06.384 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:06.384 "assigned_rate_limits": { 00:11:06.384 "rw_ios_per_sec": 0, 00:11:06.384 "rw_mbytes_per_sec": 0, 00:11:06.384 "r_mbytes_per_sec": 0, 00:11:06.384 "w_mbytes_per_sec": 0 00:11:06.384 }, 00:11:06.384 "claimed": false, 00:11:06.384 "zoned": false, 00:11:06.384 "supported_io_types": { 00:11:06.384 "read": true, 00:11:06.384 "write": true, 00:11:06.384 "unmap": true, 00:11:06.384 "flush": true, 00:11:06.384 "reset": true, 00:11:06.384 "nvme_admin": false, 00:11:06.384 "nvme_io": false, 00:11:06.384 "nvme_io_md": false, 00:11:06.384 "write_zeroes": true, 00:11:06.384 "zcopy": false, 00:11:06.384 "get_zone_info": false, 00:11:06.384 "zone_management": false, 00:11:06.384 "zone_append": false, 00:11:06.384 "compare": true, 00:11:06.384 "compare_and_write": false, 00:11:06.384 "abort": true, 00:11:06.384 "seek_hole": false, 00:11:06.384 "seek_data": false, 00:11:06.384 "copy": true, 00:11:06.384 "nvme_iov_md": false 00:11:06.384 }, 00:11:06.384 "driver_specific": { 00:11:06.384 "gpt": { 00:11:06.384 "base_bdev": "Nvme1n1", 00:11:06.384 "offset_blocks": 655360, 00:11:06.384 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:11:06.384 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:06.384 "partition_name": "SPDK_TEST_second" 00:11:06.384 } 00:11:06.384 } 00:11:06.384 } 00:11:06.384 ]' 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63831 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63831 ']' 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63831 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63831 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.384 killing process with pid 63831 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63831' 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63831 00:11:06.384 08:29:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63831 00:11:08.915 00:11:08.915 real 0m4.173s 00:11:08.915 user 0m4.577s 00:11:08.915 sys 0m0.456s 00:11:08.915 08:29:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.915 08:29:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:08.915 ************************************ 00:11:08.915 END TEST bdev_gpt_uuid 00:11:08.915 ************************************ 00:11:08.915 08:29:47 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:11:08.915 08:29:47 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:11:08.915 08:29:47 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:11:08.915 08:29:47 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:08.915 08:29:47 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:08.915 08:29:47 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:11:08.915 08:29:47 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:11:08.915 08:29:47 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:11:08.915 08:29:47 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:08.915 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:08.915 Waiting for block devices as requested 00:11:09.173 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.173 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:11:09.173 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.431 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:14.747 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:14.747 08:29:53 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:11:14.747 08:29:53 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:11:14.747 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:14.747 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:14.747 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:14.747 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:14.747 08:29:53 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:11:14.747 00:11:14.747 real 1m5.438s 00:11:14.747 user 1m25.584s 00:11:14.747 sys 0m9.777s 00:11:14.747 08:29:53 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.747 08:29:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:14.747 ************************************ 00:11:14.747 END TEST blockdev_nvme_gpt 00:11:14.747 ************************************ 00:11:14.747 08:29:53 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:14.747 08:29:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:14.747 08:29:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.747 08:29:53 -- common/autotest_common.sh@10 -- # set +x 00:11:14.747 ************************************ 00:11:14.747 START TEST nvme 00:11:14.747 ************************************ 00:11:14.747 08:29:53 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:14.747 * Looking for test storage... 00:11:14.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:14.747 08:29:53 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:14.747 08:29:53 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:14.747 08:29:53 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:15.006 08:29:54 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:15.006 08:29:54 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.006 08:29:54 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.006 08:29:54 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.006 08:29:54 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.006 08:29:54 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.006 08:29:54 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.006 08:29:54 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.006 08:29:54 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.006 08:29:54 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.006 08:29:54 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.006 08:29:54 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.006 08:29:54 nvme -- scripts/common.sh@344 -- # case "$op" in 00:11:15.006 08:29:54 nvme -- scripts/common.sh@345 -- # : 1 00:11:15.006 08:29:54 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.006 08:29:54 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:15.006 08:29:54 nvme -- scripts/common.sh@365 -- # decimal 1 00:11:15.006 08:29:54 nvme -- scripts/common.sh@353 -- # local d=1 00:11:15.006 08:29:54 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.006 08:29:54 nvme -- scripts/common.sh@355 -- # echo 1 00:11:15.006 08:29:54 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.006 08:29:54 nvme -- scripts/common.sh@366 -- # decimal 2 00:11:15.006 08:29:54 nvme -- scripts/common.sh@353 -- # local d=2 00:11:15.006 08:29:54 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.006 08:29:54 nvme -- scripts/common.sh@355 -- # echo 2 00:11:15.006 08:29:54 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.006 08:29:54 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.006 08:29:54 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.006 08:29:54 nvme -- scripts/common.sh@368 -- # return 0 00:11:15.006 08:29:54 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.006 08:29:54 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:15.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.006 --rc genhtml_branch_coverage=1 00:11:15.006 --rc genhtml_function_coverage=1 00:11:15.006 --rc genhtml_legend=1 00:11:15.006 --rc geninfo_all_blocks=1 00:11:15.006 --rc geninfo_unexecuted_blocks=1 00:11:15.006 00:11:15.006 ' 00:11:15.006 08:29:54 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:15.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.006 --rc genhtml_branch_coverage=1 00:11:15.006 --rc genhtml_function_coverage=1 00:11:15.006 --rc genhtml_legend=1 00:11:15.006 --rc geninfo_all_blocks=1 00:11:15.006 --rc geninfo_unexecuted_blocks=1 00:11:15.006 00:11:15.006 ' 00:11:15.006 08:29:54 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:15.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.006 --rc genhtml_branch_coverage=1 00:11:15.006 --rc genhtml_function_coverage=1 00:11:15.006 --rc genhtml_legend=1 00:11:15.006 --rc geninfo_all_blocks=1 00:11:15.006 --rc geninfo_unexecuted_blocks=1 00:11:15.006 00:11:15.006 ' 00:11:15.006 08:29:54 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:15.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.006 --rc genhtml_branch_coverage=1 00:11:15.006 --rc genhtml_function_coverage=1 00:11:15.006 --rc genhtml_legend=1 00:11:15.006 --rc geninfo_all_blocks=1 00:11:15.006 --rc geninfo_unexecuted_blocks=1 00:11:15.006 00:11:15.006 ' 00:11:15.006 08:29:54 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:15.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:16.138 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:16.138 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:16.138 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:16.138 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:16.138 08:29:55 nvme -- nvme/nvme.sh@79 -- # uname 00:11:16.138 08:29:55 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:16.138 08:29:55 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:16.138 08:29:55 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:16.138 08:29:55 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:16.138 08:29:55 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:11:16.138 08:29:55 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:11:16.138 08:29:55 nvme -- common/autotest_common.sh@1075 -- # stubpid=64484 00:11:16.138 08:29:55 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:16.138 Waiting for stub to ready for secondary processes... 00:11:16.138 08:29:55 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:11:16.138 08:29:55 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:16.138 08:29:55 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64484 ]] 00:11:16.138 08:29:55 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:11:16.138 [2024-11-19 08:29:55.412006] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:11:16.138 [2024-11-19 08:29:55.412209] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:11:17.144 [2024-11-19 08:29:56.220223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:17.144 [2024-11-19 08:29:56.347133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.144 [2024-11-19 08:29:56.347238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.144 [2024-11-19 08:29:56.347252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.144 08:29:56 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:17.144 08:29:56 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64484 ]] 00:11:17.144 08:29:56 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:11:17.144 [2024-11-19 08:29:56.370679] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:11:17.144 [2024-11-19 08:29:56.370812] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:17.144 [2024-11-19 08:29:56.381896] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:17.144 [2024-11-19 08:29:56.382433] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:17.144 [2024-11-19 08:29:56.386043] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:17.144 [2024-11-19 08:29:56.386326] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:17.144 [2024-11-19 08:29:56.386431] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:17.144 [2024-11-19 08:29:56.389061] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:17.144 [2024-11-19 08:29:56.389301] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:17.144 [2024-11-19 08:29:56.389397] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:17.144 [2024-11-19 08:29:56.393046] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:17.144 [2024-11-19 08:29:56.393356] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:17.144 [2024-11-19 08:29:56.393460] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:17.144 [2024-11-19 08:29:56.393524] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:11:17.144 [2024-11-19 08:29:56.393586] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:11:18.079 08:29:57 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:18.079 done. 00:11:18.079 08:29:57 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:11:18.079 08:29:57 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:18.079 08:29:57 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:11:18.079 08:29:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.079 08:29:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:18.337 ************************************ 00:11:18.337 START TEST nvme_reset 00:11:18.337 ************************************ 00:11:18.337 08:29:57 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:18.595 Initializing NVMe Controllers 00:11:18.595 Skipping QEMU NVMe SSD at 0000:00:10.0 00:11:18.595 Skipping QEMU NVMe SSD at 0000:00:11.0 00:11:18.595 Skipping QEMU NVMe SSD at 0000:00:13.0 00:11:18.595 Skipping QEMU NVMe SSD at 0000:00:12.0 00:11:18.595 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:18.595 00:11:18.595 real 0m0.417s 00:11:18.595 user 0m0.188s 00:11:18.595 sys 0m0.183s 00:11:18.595 08:29:57 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.595 08:29:57 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:11:18.595 ************************************ 00:11:18.595 END TEST nvme_reset 00:11:18.595 ************************************ 00:11:18.595 08:29:57 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:18.595 08:29:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:18.595 08:29:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.595 08:29:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:18.595 ************************************ 00:11:18.595 START TEST nvme_identify 00:11:18.595 ************************************ 00:11:18.595 08:29:57 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:11:18.595 08:29:57 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:11:18.595 08:29:57 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:18.595 08:29:57 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:18.595 08:29:57 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:18.595 08:29:57 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:18.595 08:29:57 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:11:18.595 08:29:57 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:18.595 08:29:57 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:18.595 08:29:57 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:18.854 08:29:57 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:18.854 08:29:57 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:18.855 08:29:57 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:19.117 [2024-11-19 08:29:58.160789] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64513 terminated unexpected 00:11:19.117 ===================================================== 00:11:19.117 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:19.117 ===================================================== 00:11:19.117 Controller Capabilities/Features 00:11:19.117 ================================ 00:11:19.117 Vendor ID: 1b36 00:11:19.117 Subsystem Vendor ID: 1af4 00:11:19.117 Serial Number: 12340 00:11:19.117 Model Number: QEMU NVMe Ctrl 00:11:19.117 Firmware Version: 8.0.0 00:11:19.117 Recommended Arb Burst: 6 00:11:19.117 IEEE OUI Identifier: 00 54 52 00:11:19.117 Multi-path I/O 00:11:19.117 May have multiple subsystem ports: No 00:11:19.117 May have multiple controllers: No 00:11:19.117 Associated with SR-IOV VF: No 00:11:19.117 Max Data Transfer Size: 524288 00:11:19.117 Max Number of Namespaces: 256 00:11:19.117 Max Number of I/O Queues: 64 00:11:19.117 NVMe Specification Version (VS): 1.4 00:11:19.117 NVMe Specification Version (Identify): 1.4 00:11:19.117 Maximum Queue Entries: 2048 00:11:19.117 Contiguous Queues Required: Yes 00:11:19.117 Arbitration Mechanisms Supported 00:11:19.117 Weighted Round Robin: Not Supported 00:11:19.117 Vendor Specific: Not Supported 00:11:19.117 Reset Timeout: 7500 ms 00:11:19.117 Doorbell Stride: 4 bytes 00:11:19.117 NVM Subsystem Reset: Not Supported 00:11:19.117 Command Sets Supported 00:11:19.117 NVM Command Set: Supported 00:11:19.117 Boot Partition: Not Supported 00:11:19.117 Memory Page Size Minimum: 4096 bytes 00:11:19.117 Memory Page Size Maximum: 65536 bytes 00:11:19.117 Persistent Memory Region: Not Supported 00:11:19.117 Optional Asynchronous Events Supported 00:11:19.117 Namespace Attribute Notices: Supported 00:11:19.117 Firmware Activation Notices: Not Supported 00:11:19.117 ANA Change Notices: Not Supported 00:11:19.117 PLE Aggregate Log Change Notices: Not Supported 00:11:19.117 LBA Status Info Alert Notices: Not Supported 00:11:19.117 EGE Aggregate Log Change Notices: Not Supported 00:11:19.117 Normal NVM Subsystem Shutdown event: Not Supported 00:11:19.117 Zone Descriptor Change Notices: Not Supported 00:11:19.117 Discovery Log Change Notices: Not Supported 00:11:19.117 Controller Attributes 00:11:19.117 128-bit Host Identifier: Not Supported 00:11:19.117 Non-Operational Permissive Mode: Not Supported 00:11:19.117 NVM Sets: Not Supported 00:11:19.117 Read Recovery Levels: Not Supported 00:11:19.117 Endurance Groups: Not Supported 00:11:19.117 Predictable Latency Mode: Not Supported 00:11:19.117 Traffic Based Keep ALive: Not Supported 00:11:19.117 Namespace Granularity: Not Supported 00:11:19.117 SQ Associations: Not Supported 00:11:19.117 UUID List: Not Supported 00:11:19.117 Multi-Domain Subsystem: Not Supported 00:11:19.117 Fixed Capacity Management: Not Supported 00:11:19.117 Variable Capacity Management: Not Supported 00:11:19.117 Delete Endurance Group: Not Supported 00:11:19.117 Delete NVM Set: Not Supported 00:11:19.117 Extended LBA Formats Supported: Supported 00:11:19.117 Flexible Data Placement Supported: Not Supported 00:11:19.117 00:11:19.117 Controller Memory Buffer Support 00:11:19.117 ================================ 00:11:19.117 Supported: No 
00:11:19.117 00:11:19.117 Persistent Memory Region Support 00:11:19.117 ================================ 00:11:19.117 Supported: No 00:11:19.117 00:11:19.117 Admin Command Set Attributes 00:11:19.117 ============================ 00:11:19.117 Security Send/Receive: Not Supported 00:11:19.117 Format NVM: Supported 00:11:19.117 Firmware Activate/Download: Not Supported 00:11:19.117 Namespace Management: Supported 00:11:19.117 Device Self-Test: Not Supported 00:11:19.117 Directives: Supported 00:11:19.117 NVMe-MI: Not Supported 00:11:19.117 Virtualization Management: Not Supported 00:11:19.117 Doorbell Buffer Config: Supported 00:11:19.117 Get LBA Status Capability: Not Supported 00:11:19.117 Command & Feature Lockdown Capability: Not Supported 00:11:19.117 Abort Command Limit: 4 00:11:19.117 Async Event Request Limit: 4 00:11:19.117 Number of Firmware Slots: N/A 00:11:19.117 Firmware Slot 1 Read-Only: N/A 00:11:19.117 Firmware Activation Without Reset: N/A 00:11:19.117 Multiple Update Detection Support: N/A 00:11:19.117 Firmware Update Granularity: No Information Provided 00:11:19.117 Per-Namespace SMART Log: Yes 00:11:19.117 Asymmetric Namespace Access Log Page: Not Supported 00:11:19.117 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:19.117 Command Effects Log Page: Supported 00:11:19.117 Get Log Page Extended Data: Supported 00:11:19.117 Telemetry Log Pages: Not Supported 00:11:19.117 Persistent Event Log Pages: Not Supported 00:11:19.117 Supported Log Pages Log Page: May Support 00:11:19.117 Commands Supported & Effects Log Page: Not Supported 00:11:19.117 Feature Identifiers & Effects Log Page:May Support 00:11:19.117 NVMe-MI Commands & Effects Log Page: May Support 00:11:19.117 Data Area 4 for Telemetry Log: Not Supported 00:11:19.117 Error Log Page Entries Supported: 1 00:11:19.117 Keep Alive: Not Supported 00:11:19.118 00:11:19.118 NVM Command Set Attributes 00:11:19.118 ========================== 00:11:19.118 Submission Queue Entry Size 00:11:19.118 Max: 64 00:11:19.118 Min: 64 00:11:19.118 Completion Queue Entry Size 00:11:19.118 Max: 16 00:11:19.118 Min: 16 00:11:19.118 Number of Namespaces: 256 00:11:19.118 Compare Command: Supported 00:11:19.118 Write Uncorrectable Command: Not Supported 00:11:19.118 Dataset Management Command: Supported 00:11:19.118 Write Zeroes Command: Supported 00:11:19.118 Set Features Save Field: Supported 00:11:19.118 Reservations: Not Supported 00:11:19.118 Timestamp: Supported 00:11:19.118 Copy: Supported 00:11:19.118 Volatile Write Cache: Present 00:11:19.118 Atomic Write Unit (Normal): 1 00:11:19.118 Atomic Write Unit (PFail): 1 00:11:19.118 Atomic Compare & Write Unit: 1 00:11:19.118 Fused Compare & Write: Not Supported 00:11:19.118 Scatter-Gather List 00:11:19.118 SGL Command Set: Supported 00:11:19.118 SGL Keyed: Not Supported 00:11:19.118 SGL Bit Bucket Descriptor: Not Supported 00:11:19.118 SGL Metadata Pointer: Not Supported 00:11:19.118 Oversized SGL: Not Supported 00:11:19.118 SGL Metadata Address: Not Supported 00:11:19.118 SGL Offset: Not Supported 00:11:19.118 Transport SGL Data Block: Not Supported 00:11:19.118 Replay Protected Memory Block: Not Supported 00:11:19.118 00:11:19.118 Firmware Slot Information 00:11:19.118 ========================= 00:11:19.118 Active slot: 1 00:11:19.118 Slot 1 Firmware Revision: 1.0 00:11:19.118 00:11:19.118 00:11:19.118 Commands Supported and Effects 00:11:19.118 ============================== 00:11:19.118 Admin Commands 00:11:19.118 -------------- 00:11:19.118 Delete I/O Submission Queue (00h): Supported 
00:11:19.118 Create I/O Submission Queue (01h): Supported 00:11:19.118 Get Log Page (02h): Supported 00:11:19.118 Delete I/O Completion Queue (04h): Supported 00:11:19.118 Create I/O Completion Queue (05h): Supported 00:11:19.118 Identify (06h): Supported 00:11:19.118 Abort (08h): Supported 00:11:19.118 Set Features (09h): Supported 00:11:19.118 Get Features (0Ah): Supported 00:11:19.118 Asynchronous Event Request (0Ch): Supported 00:11:19.118 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:19.118 Directive Send (19h): Supported 00:11:19.118 Directive Receive (1Ah): Supported 00:11:19.118 Virtualization Management (1Ch): Supported 00:11:19.118 Doorbell Buffer Config (7Ch): Supported 00:11:19.118 Format NVM (80h): Supported LBA-Change 00:11:19.118 I/O Commands 00:11:19.118 ------------ 00:11:19.118 Flush (00h): Supported LBA-Change 00:11:19.118 Write (01h): Supported LBA-Change 00:11:19.118 Read (02h): Supported 00:11:19.118 Compare (05h): Supported 00:11:19.118 Write Zeroes (08h): Supported LBA-Change 00:11:19.118 Dataset Management (09h): Supported LBA-Change 00:11:19.118 Unknown (0Ch): Supported 00:11:19.118 Unknown (12h): Supported 00:11:19.118 Copy (19h): Supported LBA-Change 00:11:19.118 Unknown (1Dh): Supported LBA-Change 00:11:19.118 00:11:19.118 Error Log 00:11:19.118 ========= 00:11:19.118 00:11:19.118 Arbitration 00:11:19.118 =========== 00:11:19.118 Arbitration Burst: no limit 00:11:19.118 00:11:19.118 Power Management 00:11:19.118 ================ 00:11:19.118 Number of Power States: 1 00:11:19.118 Current Power State: Power State #0 00:11:19.118 Power State #0: 00:11:19.118 Max Power: 25.00 W 00:11:19.118 Non-Operational State: Operational 00:11:19.118 Entry Latency: 16 microseconds 00:11:19.118 Exit Latency: 4 microseconds 00:11:19.118 Relative Read Throughput: 0 00:11:19.118 Relative Read Latency: 0 00:11:19.118 Relative Write Throughput: 0 00:11:19.118 Relative Write Latency: 0 00:11:19.118 [2024-11-19 08:29:58.162318] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64513 terminated unexpected 00:11:19.118 Idle Power: Not Reported 00:11:19.118 Active Power: Not Reported 00:11:19.118 Non-Operational Permissive Mode: Not Supported 00:11:19.118 00:11:19.118 Health Information 00:11:19.118 ================== 00:11:19.118 Critical Warnings: 00:11:19.118 Available Spare Space: OK 00:11:19.118 Temperature: OK 00:11:19.118 Device Reliability: OK 00:11:19.118 Read Only: No 00:11:19.118 Volatile Memory Backup: OK 00:11:19.118 Current Temperature: 323 Kelvin (50 Celsius) 00:11:19.118 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:19.118 Available Spare: 0% 00:11:19.118 Available Spare Threshold: 0% 00:11:19.118 Life Percentage Used: 0% 00:11:19.118 Data Units Read: 648 00:11:19.118 Data Units Written: 576 00:11:19.118 Host Read Commands: 33160 00:11:19.118 Host Write Commands: 32946 00:11:19.118 Controller Busy Time: 0 minutes 00:11:19.118 Power Cycles: 0 00:11:19.118 Power On Hours: 0 hours 00:11:19.118 Unsafe Shutdowns: 0 00:11:19.118 Unrecoverable Media Errors: 0 00:11:19.118 Lifetime Error Log Entries: 0 00:11:19.118 Warning Temperature Time: 0 minutes 00:11:19.118 Critical Temperature Time: 0 minutes 00:11:19.118 00:11:19.118 Number of Queues 00:11:19.118 ================ 00:11:19.118 Number of I/O Submission Queues: 64 00:11:19.118 Number of I/O Completion Queues: 64 00:11:19.118 00:11:19.118 ZNS Specific Controller Data 00:11:19.118 ============================ 00:11:19.118 Zone Append Size Limit: 0 00:11:19.118 
00:11:19.118 00:11:19.118 Active Namespaces 00:11:19.118 ================= 00:11:19.118 Namespace ID:1 00:11:19.118 Error Recovery Timeout: Unlimited 00:11:19.118 Command Set Identifier: NVM (00h) 00:11:19.118 Deallocate: Supported 00:11:19.118 Deallocated/Unwritten Error: Supported 00:11:19.118 Deallocated Read Value: All 0x00 00:11:19.118 Deallocate in Write Zeroes: Not Supported 00:11:19.118 Deallocated Guard Field: 0xFFFF 00:11:19.118 Flush: Supported 00:11:19.118 Reservation: Not Supported 00:11:19.118 Metadata Transferred as: Separate Metadata Buffer 00:11:19.118 Namespace Sharing Capabilities: Private 00:11:19.118 Size (in LBAs): 1548666 (5GiB) 00:11:19.118 Capacity (in LBAs): 1548666 (5GiB) 00:11:19.118 Utilization (in LBAs): 1548666 (5GiB) 00:11:19.118 Thin Provisioning: Not Supported 00:11:19.118 Per-NS Atomic Units: No 00:11:19.118 Maximum Single Source Range Length: 128 00:11:19.118 Maximum Copy Length: 128 00:11:19.118 Maximum Source Range Count: 128 00:11:19.118 NGUID/EUI64 Never Reused: No 00:11:19.118 Namespace Write Protected: No 00:11:19.118 Number of LBA Formats: 8 00:11:19.118 Current LBA Format: LBA Format #07 00:11:19.118 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:19.118 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:19.118 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:19.118 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:19.118 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:19.118 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:19.118 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:19.118 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:19.118 00:11:19.118 NVM Specific Namespace Data 00:11:19.118 =========================== 00:11:19.118 Logical Block Storage Tag Mask: 0 00:11:19.118 Protection Information Capabilities: 00:11:19.118 16b Guard Protection Information Storage Tag Support: No 00:11:19.118 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:19.118 Storage Tag Check Read Support: No 00:11:19.118 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.118 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.118 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.118 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.118 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.118 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.118 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.118 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.118 ===================================================== 00:11:19.118 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:19.118 ===================================================== 00:11:19.118 Controller Capabilities/Features 00:11:19.118 ================================ 00:11:19.118 Vendor ID: 1b36 00:11:19.118 Subsystem Vendor ID: 1af4 00:11:19.118 Serial Number: 12341 00:11:19.118 Model Number: QEMU NVMe Ctrl 00:11:19.118 Firmware Version: 8.0.0 00:11:19.118 Recommended Arb Burst: 6 00:11:19.118 IEEE OUI Identifier: 00 54 52 00:11:19.118 Multi-path I/O 00:11:19.118 May have multiple subsystem ports: No 00:11:19.118 May have multiple controllers: No 
00:11:19.118 Associated with SR-IOV VF: No 00:11:19.118 Max Data Transfer Size: 524288 00:11:19.119 Max Number of Namespaces: 256 00:11:19.119 Max Number of I/O Queues: 64 00:11:19.119 NVMe Specification Version (VS): 1.4 00:11:19.119 NVMe Specification Version (Identify): 1.4 00:11:19.119 Maximum Queue Entries: 2048 00:11:19.119 Contiguous Queues Required: Yes 00:11:19.119 Arbitration Mechanisms Supported 00:11:19.119 Weighted Round Robin: Not Supported 00:11:19.119 Vendor Specific: Not Supported 00:11:19.119 Reset Timeout: 7500 ms 00:11:19.119 Doorbell Stride: 4 bytes 00:11:19.119 NVM Subsystem Reset: Not Supported 00:11:19.119 Command Sets Supported 00:11:19.119 NVM Command Set: Supported 00:11:19.119 Boot Partition: Not Supported 00:11:19.119 Memory Page Size Minimum: 4096 bytes 00:11:19.119 Memory Page Size Maximum: 65536 bytes 00:11:19.119 Persistent Memory Region: Not Supported 00:11:19.119 Optional Asynchronous Events Supported 00:11:19.119 Namespace Attribute Notices: Supported 00:11:19.119 Firmware Activation Notices: Not Supported 00:11:19.119 ANA Change Notices: Not Supported 00:11:19.119 PLE Aggregate Log Change Notices: Not Supported 00:11:19.119 LBA Status Info Alert Notices: Not Supported 00:11:19.119 EGE Aggregate Log Change Notices: Not Supported 00:11:19.119 Normal NVM Subsystem Shutdown event: Not Supported 00:11:19.119 Zone Descriptor Change Notices: Not Supported 00:11:19.119 Discovery Log Change Notices: Not Supported 00:11:19.119 Controller Attributes 00:11:19.119 128-bit Host Identifier: Not Supported 00:11:19.119 Non-Operational Permissive Mode: Not Supported 00:11:19.119 NVM Sets: Not Supported 00:11:19.119 Read Recovery Levels: Not Supported 00:11:19.119 Endurance Groups: Not Supported 00:11:19.119 Predictable Latency Mode: Not Supported 00:11:19.119 Traffic Based Keep ALive: Not Supported 00:11:19.119 Namespace Granularity: Not Supported 00:11:19.119 SQ Associations: Not Supported 00:11:19.119 UUID List: Not Supported 00:11:19.119 Multi-Domain Subsystem: Not Supported 00:11:19.119 Fixed Capacity Management: Not Supported 00:11:19.119 Variable Capacity Management: Not Supported 00:11:19.119 Delete Endurance Group: Not Supported 00:11:19.119 Delete NVM Set: Not Supported 00:11:19.119 Extended LBA Formats Supported: Supported 00:11:19.119 Flexible Data Placement Supported: Not Supported 00:11:19.119 00:11:19.119 Controller Memory Buffer Support 00:11:19.119 ================================ 00:11:19.119 Supported: No 00:11:19.119 00:11:19.119 Persistent Memory Region Support 00:11:19.119 ================================ 00:11:19.119 Supported: No 00:11:19.119 00:11:19.119 Admin Command Set Attributes 00:11:19.119 ============================ 00:11:19.119 Security Send/Receive: Not Supported 00:11:19.119 Format NVM: Supported 00:11:19.119 Firmware Activate/Download: Not Supported 00:11:19.119 Namespace Management: Supported 00:11:19.119 Device Self-Test: Not Supported 00:11:19.119 Directives: Supported 00:11:19.119 NVMe-MI: Not Supported 00:11:19.119 Virtualization Management: Not Supported 00:11:19.119 Doorbell Buffer Config: Supported 00:11:19.119 Get LBA Status Capability: Not Supported 00:11:19.119 Command & Feature Lockdown Capability: Not Supported 00:11:19.119 Abort Command Limit: 4 00:11:19.119 Async Event Request Limit: 4 00:11:19.119 Number of Firmware Slots: N/A 00:11:19.119 Firmware Slot 1 Read-Only: N/A 00:11:19.119 Firmware Activation Without Reset: N/A 00:11:19.119 Multiple Update Detection Support: N/A 00:11:19.119 Firmware Update Granularity: No 
Information Provided 00:11:19.119 Per-Namespace SMART Log: Yes 00:11:19.119 Asymmetric Namespace Access Log Page: Not Supported 00:11:19.119 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:19.119 Command Effects Log Page: Supported 00:11:19.119 Get Log Page Extended Data: Supported 00:11:19.119 Telemetry Log Pages: Not Supported 00:11:19.119 Persistent Event Log Pages: Not Supported 00:11:19.119 Supported Log Pages Log Page: May Support 00:11:19.119 Commands Supported & Effects Log Page: Not Supported 00:11:19.119 Feature Identifiers & Effects Log Page:May Support 00:11:19.119 NVMe-MI Commands & Effects Log Page: May Support 00:11:19.119 Data Area 4 for Telemetry Log: Not Supported 00:11:19.119 Error Log Page Entries Supported: 1 00:11:19.119 Keep Alive: Not Supported 00:11:19.119 00:11:19.119 NVM Command Set Attributes 00:11:19.119 ========================== 00:11:19.119 Submission Queue Entry Size 00:11:19.119 Max: 64 00:11:19.119 Min: 64 00:11:19.119 Completion Queue Entry Size 00:11:19.119 Max: 16 00:11:19.119 Min: 16 00:11:19.119 Number of Namespaces: 256 00:11:19.119 Compare Command: Supported 00:11:19.119 Write Uncorrectable Command: Not Supported 00:11:19.119 Dataset Management Command: Supported 00:11:19.119 Write Zeroes Command: Supported 00:11:19.119 Set Features Save Field: Supported 00:11:19.119 Reservations: Not Supported 00:11:19.119 Timestamp: Supported 00:11:19.119 Copy: Supported 00:11:19.119 Volatile Write Cache: Present 00:11:19.119 Atomic Write Unit (Normal): 1 00:11:19.119 Atomic Write Unit (PFail): 1 00:11:19.119 Atomic Compare & Write Unit: 1 00:11:19.119 Fused Compare & Write: Not Supported 00:11:19.119 Scatter-Gather List 00:11:19.119 SGL Command Set: Supported 00:11:19.119 SGL Keyed: Not Supported 00:11:19.119 SGL Bit Bucket Descriptor: Not Supported 00:11:19.119 SGL Metadata Pointer: Not Supported 00:11:19.119 Oversized SGL: Not Supported 00:11:19.119 SGL Metadata Address: Not Supported 00:11:19.119 SGL Offset: Not Supported 00:11:19.119 Transport SGL Data Block: Not Supported 00:11:19.119 Replay Protected Memory Block: Not Supported 00:11:19.119 00:11:19.119 Firmware Slot Information 00:11:19.119 ========================= 00:11:19.119 Active slot: 1 00:11:19.119 Slot 1 Firmware Revision: 1.0 00:11:19.119 00:11:19.119 00:11:19.119 Commands Supported and Effects 00:11:19.119 ============================== 00:11:19.119 Admin Commands 00:11:19.119 -------------- 00:11:19.119 Delete I/O Submission Queue (00h): Supported 00:11:19.119 Create I/O Submission Queue (01h): Supported 00:11:19.119 Get Log Page (02h): Supported 00:11:19.119 Delete I/O Completion Queue (04h): Supported 00:11:19.119 Create I/O Completion Queue (05h): Supported 00:11:19.119 Identify (06h): Supported 00:11:19.119 Abort (08h): Supported 00:11:19.119 Set Features (09h): Supported 00:11:19.119 Get Features (0Ah): Supported 00:11:19.119 Asynchronous Event Request (0Ch): Supported 00:11:19.119 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:19.119 Directive Send (19h): Supported 00:11:19.119 Directive Receive (1Ah): Supported 00:11:19.119 Virtualization Management (1Ch): Supported 00:11:19.119 Doorbell Buffer Config (7Ch): Supported 00:11:19.119 Format NVM (80h): Supported LBA-Change 00:11:19.119 I/O Commands 00:11:19.119 ------------ 00:11:19.119 Flush (00h): Supported LBA-Change 00:11:19.119 Write (01h): Supported LBA-Change 00:11:19.119 Read (02h): Supported 00:11:19.119 Compare (05h): Supported 00:11:19.119 Write Zeroes (08h): Supported LBA-Change 00:11:19.119 Dataset Management 
(09h): Supported LBA-Change 00:11:19.119 Unknown (0Ch): Supported 00:11:19.119 Unknown (12h): Supported 00:11:19.119 Copy (19h): Supported LBA-Change 00:11:19.119 Unknown (1Dh): Supported LBA-Change 00:11:19.119 00:11:19.119 Error Log 00:11:19.119 ========= 00:11:19.119 00:11:19.119 Arbitration 00:11:19.119 =========== 00:11:19.119 Arbitration Burst: no limit 00:11:19.119 00:11:19.119 Power Management 00:11:19.119 ================ 00:11:19.119 Number of Power States: 1 00:11:19.119 Current Power State: Power State #0 00:11:19.119 Power State #0: 00:11:19.119 Max Power: 25.00 W 00:11:19.119 Non-Operational State: Operational 00:11:19.119 Entry Latency: 16 microseconds 00:11:19.119 Exit Latency: 4 microseconds 00:11:19.119 Relative Read Throughput: 0 00:11:19.119 Relative Read Latency: 0 00:11:19.119 Relative Write Throughput: 0 00:11:19.119 Relative Write Latency: 0 00:11:19.119 Idle Power: Not Reported 00:11:19.119 Active Power: Not Reported 00:11:19.119 Non-Operational Permissive Mode: Not Supported 00:11:19.119 00:11:19.119 Health Information 00:11:19.119 ================== 00:11:19.119 Critical Warnings: 00:11:19.119 Available Spare Space: OK 00:11:19.119 [2024-11-19 08:29:58.163364] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64513 terminated unexpected 00:11:19.119 Temperature: OK 00:11:19.119 Device Reliability: OK 00:11:19.119 Read Only: No 00:11:19.119 Volatile Memory Backup: OK 00:11:19.119 Current Temperature: 323 Kelvin (50 Celsius) 00:11:19.119 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:19.119 Available Spare: 0% 00:11:19.119 Available Spare Threshold: 0% 00:11:19.120 Life Percentage Used: 0% 00:11:19.120 Data Units Read: 950 00:11:19.120 Data Units Written: 817 00:11:19.120 Host Read Commands: 48903 00:11:19.120 Host Write Commands: 47669 00:11:19.120 Controller Busy Time: 0 minutes 00:11:19.120 Power Cycles: 0 00:11:19.120 Power On Hours: 0 hours 00:11:19.120 Unsafe Shutdowns: 0 00:11:19.120 Unrecoverable Media Errors: 0 00:11:19.120 Lifetime Error Log Entries: 0 00:11:19.120 Warning Temperature Time: 0 minutes 00:11:19.120 Critical Temperature Time: 0 minutes 00:11:19.120 00:11:19.120 Number of Queues 00:11:19.120 ================ 00:11:19.120 Number of I/O Submission Queues: 64 00:11:19.120 Number of I/O Completion Queues: 64 00:11:19.120 00:11:19.120 ZNS Specific Controller Data 00:11:19.120 ============================ 00:11:19.120 Zone Append Size Limit: 0 00:11:19.120 00:11:19.120 00:11:19.120 Active Namespaces 00:11:19.120 ================= 00:11:19.120 Namespace ID:1 00:11:19.120 Error Recovery Timeout: Unlimited 00:11:19.120 Command Set Identifier: NVM (00h) 00:11:19.120 Deallocate: Supported 00:11:19.120 Deallocated/Unwritten Error: Supported 00:11:19.120 Deallocated Read Value: All 0x00 00:11:19.120 Deallocate in Write Zeroes: Not Supported 00:11:19.120 Deallocated Guard Field: 0xFFFF 00:11:19.120 Flush: Supported 00:11:19.120 Reservation: Not Supported 00:11:19.120 Namespace Sharing Capabilities: Private 00:11:19.120 Size (in LBAs): 1310720 (5GiB) 00:11:19.120 Capacity (in LBAs): 1310720 (5GiB) 00:11:19.120 Utilization (in LBAs): 1310720 (5GiB) 00:11:19.120 Thin Provisioning: Not Supported 00:11:19.120 Per-NS Atomic Units: No 00:11:19.120 Maximum Single Source Range Length: 128 00:11:19.120 Maximum Copy Length: 128 00:11:19.120 Maximum Source Range Count: 128 00:11:19.120 NGUID/EUI64 Never Reused: No 00:11:19.120 Namespace Write Protected: No 00:11:19.120 Number of LBA Formats: 8 00:11:19.120 Current LBA Format: 
LBA Format #04 00:11:19.120 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:19.120 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:19.120 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:19.120 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:19.120 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:19.120 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:19.120 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:19.120 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:19.120 00:11:19.120 NVM Specific Namespace Data 00:11:19.120 =========================== 00:11:19.120 Logical Block Storage Tag Mask: 0 00:11:19.120 Protection Information Capabilities: 00:11:19.120 16b Guard Protection Information Storage Tag Support: No 00:11:19.120 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:19.120 Storage Tag Check Read Support: No 00:11:19.120 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.120 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.120 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.120 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.120 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.120 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.120 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.120 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.120 ===================================================== 00:11:19.120 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:19.120 ===================================================== 00:11:19.120 Controller Capabilities/Features 00:11:19.120 ================================ 00:11:19.120 Vendor ID: 1b36 00:11:19.120 Subsystem Vendor ID: 1af4 00:11:19.120 Serial Number: 12343 00:11:19.120 Model Number: QEMU NVMe Ctrl 00:11:19.120 Firmware Version: 8.0.0 00:11:19.120 Recommended Arb Burst: 6 00:11:19.120 IEEE OUI Identifier: 00 54 52 00:11:19.120 Multi-path I/O 00:11:19.120 May have multiple subsystem ports: No 00:11:19.120 May have multiple controllers: Yes 00:11:19.120 Associated with SR-IOV VF: No 00:11:19.120 Max Data Transfer Size: 524288 00:11:19.120 Max Number of Namespaces: 256 00:11:19.120 Max Number of I/O Queues: 64 00:11:19.120 NVMe Specification Version (VS): 1.4 00:11:19.120 NVMe Specification Version (Identify): 1.4 00:11:19.120 Maximum Queue Entries: 2048 00:11:19.120 Contiguous Queues Required: Yes 00:11:19.120 Arbitration Mechanisms Supported 00:11:19.120 Weighted Round Robin: Not Supported 00:11:19.120 Vendor Specific: Not Supported 00:11:19.120 Reset Timeout: 7500 ms 00:11:19.120 Doorbell Stride: 4 bytes 00:11:19.120 NVM Subsystem Reset: Not Supported 00:11:19.120 Command Sets Supported 00:11:19.120 NVM Command Set: Supported 00:11:19.120 Boot Partition: Not Supported 00:11:19.120 Memory Page Size Minimum: 4096 bytes 00:11:19.120 Memory Page Size Maximum: 65536 bytes 00:11:19.120 Persistent Memory Region: Not Supported 00:11:19.120 Optional Asynchronous Events Supported 00:11:19.120 Namespace Attribute Notices: Supported 00:11:19.120 Firmware Activation Notices: Not Supported 00:11:19.120 ANA Change Notices: Not Supported 00:11:19.120 PLE Aggregate Log 
Change Notices: Not Supported 00:11:19.120 LBA Status Info Alert Notices: Not Supported 00:11:19.120 EGE Aggregate Log Change Notices: Not Supported 00:11:19.120 Normal NVM Subsystem Shutdown event: Not Supported 00:11:19.120 Zone Descriptor Change Notices: Not Supported 00:11:19.120 Discovery Log Change Notices: Not Supported 00:11:19.120 Controller Attributes 00:11:19.120 128-bit Host Identifier: Not Supported 00:11:19.120 Non-Operational Permissive Mode: Not Supported 00:11:19.120 NVM Sets: Not Supported 00:11:19.120 Read Recovery Levels: Not Supported 00:11:19.120 Endurance Groups: Supported 00:11:19.120 Predictable Latency Mode: Not Supported 00:11:19.120 Traffic Based Keep ALive: Not Supported 00:11:19.120 Namespace Granularity: Not Supported 00:11:19.120 SQ Associations: Not Supported 00:11:19.120 UUID List: Not Supported 00:11:19.120 Multi-Domain Subsystem: Not Supported 00:11:19.120 Fixed Capacity Management: Not Supported 00:11:19.120 Variable Capacity Management: Not Supported 00:11:19.120 Delete Endurance Group: Not Supported 00:11:19.120 Delete NVM Set: Not Supported 00:11:19.120 Extended LBA Formats Supported: Supported 00:11:19.120 Flexible Data Placement Supported: Supported 00:11:19.120 00:11:19.120 Controller Memory Buffer Support 00:11:19.120 ================================ 00:11:19.120 Supported: No 00:11:19.120 00:11:19.120 Persistent Memory Region Support 00:11:19.120 ================================ 00:11:19.120 Supported: No 00:11:19.120 00:11:19.120 Admin Command Set Attributes 00:11:19.120 ============================ 00:11:19.120 Security Send/Receive: Not Supported 00:11:19.120 Format NVM: Supported 00:11:19.120 Firmware Activate/Download: Not Supported 00:11:19.120 Namespace Management: Supported 00:11:19.120 Device Self-Test: Not Supported 00:11:19.120 Directives: Supported 00:11:19.120 NVMe-MI: Not Supported 00:11:19.120 Virtualization Management: Not Supported 00:11:19.120 Doorbell Buffer Config: Supported 00:11:19.120 Get LBA Status Capability: Not Supported 00:11:19.120 Command & Feature Lockdown Capability: Not Supported 00:11:19.120 Abort Command Limit: 4 00:11:19.120 Async Event Request Limit: 4 00:11:19.120 Number of Firmware Slots: N/A 00:11:19.120 Firmware Slot 1 Read-Only: N/A 00:11:19.120 Firmware Activation Without Reset: N/A 00:11:19.120 Multiple Update Detection Support: N/A 00:11:19.120 Firmware Update Granularity: No Information Provided 00:11:19.120 Per-Namespace SMART Log: Yes 00:11:19.120 Asymmetric Namespace Access Log Page: Not Supported 00:11:19.120 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:19.120 Command Effects Log Page: Supported 00:11:19.120 Get Log Page Extended Data: Supported 00:11:19.120 Telemetry Log Pages: Not Supported 00:11:19.120 Persistent Event Log Pages: Not Supported 00:11:19.120 Supported Log Pages Log Page: May Support 00:11:19.120 Commands Supported & Effects Log Page: Not Supported 00:11:19.120 Feature Identifiers & Effects Log Page:May Support 00:11:19.120 NVMe-MI Commands & Effects Log Page: May Support 00:11:19.120 Data Area 4 for Telemetry Log: Not Supported 00:11:19.120 Error Log Page Entries Supported: 1 00:11:19.120 Keep Alive: Not Supported 00:11:19.120 00:11:19.120 NVM Command Set Attributes 00:11:19.120 ========================== 00:11:19.120 Submission Queue Entry Size 00:11:19.120 Max: 64 00:11:19.120 Min: 64 00:11:19.120 Completion Queue Entry Size 00:11:19.120 Max: 16 00:11:19.120 Min: 16 00:11:19.120 Number of Namespaces: 256 00:11:19.120 Compare Command: Supported 00:11:19.120 Write 
Uncorrectable Command: Not Supported 00:11:19.121 Dataset Management Command: Supported 00:11:19.121 Write Zeroes Command: Supported 00:11:19.121 Set Features Save Field: Supported 00:11:19.121 Reservations: Not Supported 00:11:19.121 Timestamp: Supported 00:11:19.121 Copy: Supported 00:11:19.121 Volatile Write Cache: Present 00:11:19.121 Atomic Write Unit (Normal): 1 00:11:19.121 Atomic Write Unit (PFail): 1 00:11:19.121 Atomic Compare & Write Unit: 1 00:11:19.121 Fused Compare & Write: Not Supported 00:11:19.121 Scatter-Gather List 00:11:19.121 SGL Command Set: Supported 00:11:19.121 SGL Keyed: Not Supported 00:11:19.121 SGL Bit Bucket Descriptor: Not Supported 00:11:19.121 SGL Metadata Pointer: Not Supported 00:11:19.121 Oversized SGL: Not Supported 00:11:19.121 SGL Metadata Address: Not Supported 00:11:19.121 SGL Offset: Not Supported 00:11:19.121 Transport SGL Data Block: Not Supported 00:11:19.121 Replay Protected Memory Block: Not Supported 00:11:19.121 00:11:19.121 Firmware Slot Information 00:11:19.121 ========================= 00:11:19.121 Active slot: 1 00:11:19.121 Slot 1 Firmware Revision: 1.0 00:11:19.121 00:11:19.121 00:11:19.121 Commands Supported and Effects 00:11:19.121 ============================== 00:11:19.121 Admin Commands 00:11:19.121 -------------- 00:11:19.121 Delete I/O Submission Queue (00h): Supported 00:11:19.121 Create I/O Submission Queue (01h): Supported 00:11:19.121 Get Log Page (02h): Supported 00:11:19.121 Delete I/O Completion Queue (04h): Supported 00:11:19.121 Create I/O Completion Queue (05h): Supported 00:11:19.121 Identify (06h): Supported 00:11:19.121 Abort (08h): Supported 00:11:19.121 Set Features (09h): Supported 00:11:19.121 Get Features (0Ah): Supported 00:11:19.121 Asynchronous Event Request (0Ch): Supported 00:11:19.121 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:19.121 Directive Send (19h): Supported 00:11:19.121 Directive Receive (1Ah): Supported 00:11:19.121 Virtualization Management (1Ch): Supported 00:11:19.121 Doorbell Buffer Config (7Ch): Supported 00:11:19.121 Format NVM (80h): Supported LBA-Change 00:11:19.121 I/O Commands 00:11:19.121 ------------ 00:11:19.121 Flush (00h): Supported LBA-Change 00:11:19.121 Write (01h): Supported LBA-Change 00:11:19.121 Read (02h): Supported 00:11:19.121 Compare (05h): Supported 00:11:19.121 Write Zeroes (08h): Supported LBA-Change 00:11:19.121 Dataset Management (09h): Supported LBA-Change 00:11:19.121 Unknown (0Ch): Supported 00:11:19.121 Unknown (12h): Supported 00:11:19.121 Copy (19h): Supported LBA-Change 00:11:19.121 Unknown (1Dh): Supported LBA-Change 00:11:19.121 00:11:19.121 Error Log 00:11:19.121 ========= 00:11:19.121 00:11:19.121 Arbitration 00:11:19.121 =========== 00:11:19.121 Arbitration Burst: no limit 00:11:19.121 00:11:19.121 Power Management 00:11:19.121 ================ 00:11:19.121 Number of Power States: 1 00:11:19.121 Current Power State: Power State #0 00:11:19.121 Power State #0: 00:11:19.121 Max Power: 25.00 W 00:11:19.121 Non-Operational State: Operational 00:11:19.121 Entry Latency: 16 microseconds 00:11:19.121 Exit Latency: 4 microseconds 00:11:19.121 Relative Read Throughput: 0 00:11:19.121 Relative Read Latency: 0 00:11:19.121 Relative Write Throughput: 0 00:11:19.121 Relative Write Latency: 0 00:11:19.121 Idle Power: Not Reported 00:11:19.121 Active Power: Not Reported 00:11:19.121 Non-Operational Permissive Mode: Not Supported 00:11:19.121 00:11:19.121 Health Information 00:11:19.121 ================== 00:11:19.121 Critical Warnings: 00:11:19.121 
Available Spare Space: OK 00:11:19.121 Temperature: OK 00:11:19.121 Device Reliability: OK 00:11:19.121 Read Only: No 00:11:19.121 Volatile Memory Backup: OK 00:11:19.121 Current Temperature: 323 Kelvin (50 Celsius) 00:11:19.121 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:19.121 Available Spare: 0% 00:11:19.121 Available Spare Threshold: 0% 00:11:19.121 Life Percentage Used: 0% 00:11:19.121 Data Units Read: 769 00:11:19.121 Data Units Written: 698 00:11:19.121 Host Read Commands: 34468 00:11:19.121 Host Write Commands: 33891 00:11:19.121 Controller Busy Time: 0 minutes 00:11:19.121 Power Cycles: 0 00:11:19.121 Power On Hours: 0 hours 00:11:19.121 Unsafe Shutdowns: 0 00:11:19.121 Unrecoverable Media Errors: 0 00:11:19.121 Lifetime Error Log Entries: 0 00:11:19.121 Warning Temperature Time: 0 minutes 00:11:19.121 Critical Temperature Time: 0 minutes 00:11:19.121 00:11:19.121 Number of Queues 00:11:19.121 ================ 00:11:19.121 Number of I/O Submission Queues: 64 00:11:19.121 Number of I/O Completion Queues: 64 00:11:19.121 00:11:19.121 ZNS Specific Controller Data 00:11:19.121 ============================ 00:11:19.121 Zone Append Size Limit: 0 00:11:19.121 00:11:19.121 00:11:19.121 Active Namespaces 00:11:19.121 ================= 00:11:19.121 Namespace ID:1 00:11:19.121 Error Recovery Timeout: Unlimited 00:11:19.121 Command Set Identifier: NVM (00h) 00:11:19.121 Deallocate: Supported 00:11:19.121 Deallocated/Unwritten Error: Supported 00:11:19.121 Deallocated Read Value: All 0x00 00:11:19.121 Deallocate in Write Zeroes: Not Supported 00:11:19.121 Deallocated Guard Field: 0xFFFF 00:11:19.121 Flush: Supported 00:11:19.121 Reservation: Not Supported 00:11:19.121 Namespace Sharing Capabilities: Multiple Controllers 00:11:19.121 Size (in LBAs): 262144 (1GiB) 00:11:19.121 Capacity (in LBAs): 262144 (1GiB) 00:11:19.121 Utilization (in LBAs): 262144 (1GiB) 00:11:19.121 Thin Provisioning: Not Supported 00:11:19.121 Per-NS Atomic Units: No 00:11:19.121 Maximum Single Source Range Length: 128 00:11:19.121 Maximum Copy Length: 128 00:11:19.121 Maximum Source Range Count: 128 00:11:19.121 NGUID/EUI64 Never Reused: No 00:11:19.121 Namespace Write Protected: No 00:11:19.121 Endurance group ID: 1 00:11:19.121 Number of LBA Formats: 8 00:11:19.121 Current LBA Format: LBA Format #04 00:11:19.121 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:19.121 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:19.121 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:19.121 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:19.121 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:19.121 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:19.121 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:19.121 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:19.121 00:11:19.121 Get Feature FDP: 00:11:19.121 ================ 00:11:19.121 Enabled: Yes 00:11:19.121 FDP configuration index: 0 00:11:19.121 00:11:19.121 FDP configurations log page 00:11:19.121 =========================== 00:11:19.121 Number of FDP configurations: 1 00:11:19.121 Version: 0 00:11:19.121 Size: 112 00:11:19.121 FDP Configuration Descriptor: 0 00:11:19.121 Descriptor Size: 96 00:11:19.121 Reclaim Group Identifier format: 2 00:11:19.121 FDP Volatile Write Cache: Not Present 00:11:19.121 FDP Configuration: Valid 00:11:19.121 Vendor Specific Size: 0 00:11:19.121 Number of Reclaim Groups: 2 00:11:19.121 Number of Reclaim Unit Handles: 8 00:11:19.121 Max Placement Identifiers: 128 00:11:19.121 Number of 
Namespaces Supported: 256 00:11:19.121 Reclaim Unit Nominal Size: 6000000 bytes 00:11:19.121 Estimated Reclaim Unit Time Limit: Not Reported 00:11:19.121 RUH Desc #000: RUH Type: Initially Isolated 00:11:19.121 RUH Desc #001: RUH Type: Initially Isolated 00:11:19.121 RUH Desc #002: RUH Type: Initially Isolated 00:11:19.121 RUH Desc #003: RUH Type: Initially Isolated 00:11:19.121 RUH Desc #004: RUH Type: Initially Isolated 00:11:19.121 RUH Desc #005: RUH Type: Initially Isolated 00:11:19.121 RUH Desc #006: RUH Type: Initially Isolated 00:11:19.121 RUH Desc #007: RUH Type: Initially Isolated 00:11:19.121 00:11:19.121 FDP reclaim unit handle usage log page 00:11:19.121 ====================================== 00:11:19.121 Number of Reclaim Unit Handles: 8 00:11:19.121 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:19.121 RUH Usage Desc #001: RUH Attributes: Unused 00:11:19.121 RUH Usage Desc #002: RUH Attributes: Unused 00:11:19.121 RUH Usage Desc #003: RUH Attributes: Unused 00:11:19.121 RUH Usage Desc #004: RUH Attributes: Unused 00:11:19.121 RUH Usage Desc #005: RUH Attributes: Unused 00:11:19.121 RUH Usage Desc #006: RUH Attributes: Unused 00:11:19.121 RUH Usage Desc #007: RUH Attributes: Unused 00:11:19.121 00:11:19.121 FDP statistics log page 00:11:19.121 ======================= 00:11:19.121 Host bytes with metadata written: 427859968 00:11:19.121 [2024-11-19 08:29:58.165083] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64513 terminated unexpected 00:11:19.121 Media bytes with metadata written: 427925504 00:11:19.121 Media bytes erased: 0 00:11:19.121 00:11:19.121 FDP events log page 00:11:19.122 =================== 00:11:19.122 Number of FDP events: 0 00:11:19.122 00:11:19.122 NVM Specific Namespace Data 00:11:19.122 =========================== 00:11:19.122 Logical Block Storage Tag Mask: 0 00:11:19.122 Protection Information Capabilities: 00:11:19.122 16b Guard Protection Information Storage Tag Support: No 00:11:19.122 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:19.122 Storage Tag Check Read Support: No 00:11:19.122 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.122 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.122 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.122 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.122 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.122 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.122 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.122 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.122 ===================================================== 00:11:19.122 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:19.122 ===================================================== 00:11:19.122 Controller Capabilities/Features 00:11:19.122 ================================ 00:11:19.122 Vendor ID: 1b36 00:11:19.122 Subsystem Vendor ID: 1af4 00:11:19.122 Serial Number: 12342 00:11:19.122 Model Number: QEMU NVMe Ctrl 00:11:19.122 Firmware Version: 8.0.0 00:11:19.122 Recommended Arb Burst: 6 00:11:19.122 IEEE OUI Identifier: 00 54 52 00:11:19.122 Multi-path I/O 
00:11:19.122 May have multiple subsystem ports: No 00:11:19.122 May have multiple controllers: No 00:11:19.122 Associated with SR-IOV VF: No 00:11:19.122 Max Data Transfer Size: 524288 00:11:19.122 Max Number of Namespaces: 256 00:11:19.122 Max Number of I/O Queues: 64 00:11:19.122 NVMe Specification Version (VS): 1.4 00:11:19.122 NVMe Specification Version (Identify): 1.4 00:11:19.122 Maximum Queue Entries: 2048 00:11:19.122 Contiguous Queues Required: Yes 00:11:19.122 Arbitration Mechanisms Supported 00:11:19.122 Weighted Round Robin: Not Supported 00:11:19.122 Vendor Specific: Not Supported 00:11:19.122 Reset Timeout: 7500 ms 00:11:19.122 Doorbell Stride: 4 bytes 00:11:19.122 NVM Subsystem Reset: Not Supported 00:11:19.122 Command Sets Supported 00:11:19.122 NVM Command Set: Supported 00:11:19.122 Boot Partition: Not Supported 00:11:19.122 Memory Page Size Minimum: 4096 bytes 00:11:19.122 Memory Page Size Maximum: 65536 bytes 00:11:19.122 Persistent Memory Region: Not Supported 00:11:19.122 Optional Asynchronous Events Supported 00:11:19.122 Namespace Attribute Notices: Supported 00:11:19.122 Firmware Activation Notices: Not Supported 00:11:19.122 ANA Change Notices: Not Supported 00:11:19.122 PLE Aggregate Log Change Notices: Not Supported 00:11:19.122 LBA Status Info Alert Notices: Not Supported 00:11:19.122 EGE Aggregate Log Change Notices: Not Supported 00:11:19.122 Normal NVM Subsystem Shutdown event: Not Supported 00:11:19.122 Zone Descriptor Change Notices: Not Supported 00:11:19.122 Discovery Log Change Notices: Not Supported 00:11:19.122 Controller Attributes 00:11:19.122 128-bit Host Identifier: Not Supported 00:11:19.122 Non-Operational Permissive Mode: Not Supported 00:11:19.122 NVM Sets: Not Supported 00:11:19.122 Read Recovery Levels: Not Supported 00:11:19.122 Endurance Groups: Not Supported 00:11:19.122 Predictable Latency Mode: Not Supported 00:11:19.122 Traffic Based Keep ALive: Not Supported 00:11:19.122 Namespace Granularity: Not Supported 00:11:19.122 SQ Associations: Not Supported 00:11:19.122 UUID List: Not Supported 00:11:19.122 Multi-Domain Subsystem: Not Supported 00:11:19.122 Fixed Capacity Management: Not Supported 00:11:19.122 Variable Capacity Management: Not Supported 00:11:19.122 Delete Endurance Group: Not Supported 00:11:19.122 Delete NVM Set: Not Supported 00:11:19.122 Extended LBA Formats Supported: Supported 00:11:19.122 Flexible Data Placement Supported: Not Supported 00:11:19.122 00:11:19.122 Controller Memory Buffer Support 00:11:19.122 ================================ 00:11:19.122 Supported: No 00:11:19.122 00:11:19.122 Persistent Memory Region Support 00:11:19.122 ================================ 00:11:19.122 Supported: No 00:11:19.122 00:11:19.122 Admin Command Set Attributes 00:11:19.122 ============================ 00:11:19.122 Security Send/Receive: Not Supported 00:11:19.122 Format NVM: Supported 00:11:19.122 Firmware Activate/Download: Not Supported 00:11:19.122 Namespace Management: Supported 00:11:19.122 Device Self-Test: Not Supported 00:11:19.122 Directives: Supported 00:11:19.122 NVMe-MI: Not Supported 00:11:19.122 Virtualization Management: Not Supported 00:11:19.122 Doorbell Buffer Config: Supported 00:11:19.122 Get LBA Status Capability: Not Supported 00:11:19.122 Command & Feature Lockdown Capability: Not Supported 00:11:19.122 Abort Command Limit: 4 00:11:19.122 Async Event Request Limit: 4 00:11:19.122 Number of Firmware Slots: N/A 00:11:19.122 Firmware Slot 1 Read-Only: N/A 00:11:19.122 Firmware Activation Without Reset: N/A 
00:11:19.122 Multiple Update Detection Support: N/A 00:11:19.122 Firmware Update Granularity: No Information Provided 00:11:19.122 Per-Namespace SMART Log: Yes 00:11:19.122 Asymmetric Namespace Access Log Page: Not Supported 00:11:19.122 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:19.122 Command Effects Log Page: Supported 00:11:19.122 Get Log Page Extended Data: Supported 00:11:19.122 Telemetry Log Pages: Not Supported 00:11:19.122 Persistent Event Log Pages: Not Supported 00:11:19.122 Supported Log Pages Log Page: May Support 00:11:19.122 Commands Supported & Effects Log Page: Not Supported 00:11:19.122 Feature Identifiers & Effects Log Page:May Support 00:11:19.122 NVMe-MI Commands & Effects Log Page: May Support 00:11:19.122 Data Area 4 for Telemetry Log: Not Supported 00:11:19.122 Error Log Page Entries Supported: 1 00:11:19.122 Keep Alive: Not Supported 00:11:19.122 00:11:19.122 NVM Command Set Attributes 00:11:19.122 ========================== 00:11:19.122 Submission Queue Entry Size 00:11:19.122 Max: 64 00:11:19.122 Min: 64 00:11:19.122 Completion Queue Entry Size 00:11:19.122 Max: 16 00:11:19.122 Min: 16 00:11:19.122 Number of Namespaces: 256 00:11:19.122 Compare Command: Supported 00:11:19.122 Write Uncorrectable Command: Not Supported 00:11:19.122 Dataset Management Command: Supported 00:11:19.122 Write Zeroes Command: Supported 00:11:19.122 Set Features Save Field: Supported 00:11:19.122 Reservations: Not Supported 00:11:19.122 Timestamp: Supported 00:11:19.122 Copy: Supported 00:11:19.122 Volatile Write Cache: Present 00:11:19.122 Atomic Write Unit (Normal): 1 00:11:19.122 Atomic Write Unit (PFail): 1 00:11:19.122 Atomic Compare & Write Unit: 1 00:11:19.122 Fused Compare & Write: Not Supported 00:11:19.122 Scatter-Gather List 00:11:19.122 SGL Command Set: Supported 00:11:19.122 SGL Keyed: Not Supported 00:11:19.122 SGL Bit Bucket Descriptor: Not Supported 00:11:19.122 SGL Metadata Pointer: Not Supported 00:11:19.122 Oversized SGL: Not Supported 00:11:19.122 SGL Metadata Address: Not Supported 00:11:19.122 SGL Offset: Not Supported 00:11:19.122 Transport SGL Data Block: Not Supported 00:11:19.122 Replay Protected Memory Block: Not Supported 00:11:19.122 00:11:19.122 Firmware Slot Information 00:11:19.122 ========================= 00:11:19.123 Active slot: 1 00:11:19.123 Slot 1 Firmware Revision: 1.0 00:11:19.123 00:11:19.123 00:11:19.123 Commands Supported and Effects 00:11:19.123 ============================== 00:11:19.123 Admin Commands 00:11:19.123 -------------- 00:11:19.123 Delete I/O Submission Queue (00h): Supported 00:11:19.123 Create I/O Submission Queue (01h): Supported 00:11:19.123 Get Log Page (02h): Supported 00:11:19.123 Delete I/O Completion Queue (04h): Supported 00:11:19.123 Create I/O Completion Queue (05h): Supported 00:11:19.123 Identify (06h): Supported 00:11:19.123 Abort (08h): Supported 00:11:19.123 Set Features (09h): Supported 00:11:19.123 Get Features (0Ah): Supported 00:11:19.123 Asynchronous Event Request (0Ch): Supported 00:11:19.123 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:19.123 Directive Send (19h): Supported 00:11:19.123 Directive Receive (1Ah): Supported 00:11:19.123 Virtualization Management (1Ch): Supported 00:11:19.123 Doorbell Buffer Config (7Ch): Supported 00:11:19.123 Format NVM (80h): Supported LBA-Change 00:11:19.123 I/O Commands 00:11:19.123 ------------ 00:11:19.123 Flush (00h): Supported LBA-Change 00:11:19.123 Write (01h): Supported LBA-Change 00:11:19.123 Read (02h): Supported 00:11:19.123 Compare (05h): 
Supported 00:11:19.123 Write Zeroes (08h): Supported LBA-Change 00:11:19.123 Dataset Management (09h): Supported LBA-Change 00:11:19.123 Unknown (0Ch): Supported 00:11:19.123 Unknown (12h): Supported 00:11:19.123 Copy (19h): Supported LBA-Change 00:11:19.123 Unknown (1Dh): Supported LBA-Change 00:11:19.123 00:11:19.123 Error Log 00:11:19.123 ========= 00:11:19.123 00:11:19.123 Arbitration 00:11:19.123 =========== 00:11:19.123 Arbitration Burst: no limit 00:11:19.123 00:11:19.123 Power Management 00:11:19.123 ================ 00:11:19.123 Number of Power States: 1 00:11:19.123 Current Power State: Power State #0 00:11:19.123 Power State #0: 00:11:19.123 Max Power: 25.00 W 00:11:19.123 Non-Operational State: Operational 00:11:19.123 Entry Latency: 16 microseconds 00:11:19.123 Exit Latency: 4 microseconds 00:11:19.123 Relative Read Throughput: 0 00:11:19.123 Relative Read Latency: 0 00:11:19.123 Relative Write Throughput: 0 00:11:19.123 Relative Write Latency: 0 00:11:19.123 Idle Power: Not Reported 00:11:19.123 Active Power: Not Reported 00:11:19.123 Non-Operational Permissive Mode: Not Supported 00:11:19.123 00:11:19.123 Health Information 00:11:19.123 ================== 00:11:19.123 Critical Warnings: 00:11:19.123 Available Spare Space: OK 00:11:19.123 Temperature: OK 00:11:19.123 Device Reliability: OK 00:11:19.123 Read Only: No 00:11:19.123 Volatile Memory Backup: OK 00:11:19.123 Current Temperature: 323 Kelvin (50 Celsius) 00:11:19.123 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:19.123 Available Spare: 0% 00:11:19.123 Available Spare Threshold: 0% 00:11:19.123 Life Percentage Used: 0% 00:11:19.123 Data Units Read: 2045 00:11:19.123 Data Units Written: 1832 00:11:19.123 Host Read Commands: 101053 00:11:19.123 Host Write Commands: 99322 00:11:19.123 Controller Busy Time: 0 minutes 00:11:19.123 Power Cycles: 0 00:11:19.123 Power On Hours: 0 hours 00:11:19.123 Unsafe Shutdowns: 0 00:11:19.123 Unrecoverable Media Errors: 0 00:11:19.123 Lifetime Error Log Entries: 0 00:11:19.123 Warning Temperature Time: 0 minutes 00:11:19.123 Critical Temperature Time: 0 minutes 00:11:19.123 00:11:19.123 Number of Queues 00:11:19.123 ================ 00:11:19.123 Number of I/O Submission Queues: 64 00:11:19.123 Number of I/O Completion Queues: 64 00:11:19.123 00:11:19.123 ZNS Specific Controller Data 00:11:19.123 ============================ 00:11:19.123 Zone Append Size Limit: 0 00:11:19.123 00:11:19.123 00:11:19.123 Active Namespaces 00:11:19.123 ================= 00:11:19.123 Namespace ID:1 00:11:19.123 Error Recovery Timeout: Unlimited 00:11:19.123 Command Set Identifier: NVM (00h) 00:11:19.123 Deallocate: Supported 00:11:19.123 Deallocated/Unwritten Error: Supported 00:11:19.123 Deallocated Read Value: All 0x00 00:11:19.123 Deallocate in Write Zeroes: Not Supported 00:11:19.123 Deallocated Guard Field: 0xFFFF 00:11:19.123 Flush: Supported 00:11:19.123 Reservation: Not Supported 00:11:19.123 Namespace Sharing Capabilities: Private 00:11:19.123 Size (in LBAs): 1048576 (4GiB) 00:11:19.123 Capacity (in LBAs): 1048576 (4GiB) 00:11:19.123 Utilization (in LBAs): 1048576 (4GiB) 00:11:19.123 Thin Provisioning: Not Supported 00:11:19.123 Per-NS Atomic Units: No 00:11:19.123 Maximum Single Source Range Length: 128 00:11:19.123 Maximum Copy Length: 128 00:11:19.123 Maximum Source Range Count: 128 00:11:19.123 NGUID/EUI64 Never Reused: No 00:11:19.123 Namespace Write Protected: No 00:11:19.123 Number of LBA Formats: 8 00:11:19.123 Current LBA Format: LBA Format #04 00:11:19.123 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:11:19.123 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:19.123 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:19.123 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:19.123 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:19.123 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:19.123 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:19.123 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:19.123 00:11:19.123 NVM Specific Namespace Data 00:11:19.123 =========================== 00:11:19.123 Logical Block Storage Tag Mask: 0 00:11:19.123 Protection Information Capabilities: 00:11:19.123 16b Guard Protection Information Storage Tag Support: No 00:11:19.123 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:19.123 Storage Tag Check Read Support: No 00:11:19.123 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.123 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.123 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.123 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.123 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.123 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.123 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.123 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.123 Namespace ID:2 00:11:19.123 Error Recovery Timeout: Unlimited 00:11:19.123 Command Set Identifier: NVM (00h) 00:11:19.123 Deallocate: Supported 00:11:19.123 Deallocated/Unwritten Error: Supported 00:11:19.123 Deallocated Read Value: All 0x00 00:11:19.123 Deallocate in Write Zeroes: Not Supported 00:11:19.123 Deallocated Guard Field: 0xFFFF 00:11:19.123 Flush: Supported 00:11:19.123 Reservation: Not Supported 00:11:19.123 Namespace Sharing Capabilities: Private 00:11:19.123 Size (in LBAs): 1048576 (4GiB) 00:11:19.123 Capacity (in LBAs): 1048576 (4GiB) 00:11:19.123 Utilization (in LBAs): 1048576 (4GiB) 00:11:19.123 Thin Provisioning: Not Supported 00:11:19.123 Per-NS Atomic Units: No 00:11:19.123 Maximum Single Source Range Length: 128 00:11:19.123 Maximum Copy Length: 128 00:11:19.123 Maximum Source Range Count: 128 00:11:19.123 NGUID/EUI64 Never Reused: No 00:11:19.123 Namespace Write Protected: No 00:11:19.123 Number of LBA Formats: 8 00:11:19.123 Current LBA Format: LBA Format #04 00:11:19.123 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:19.123 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:19.123 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:19.123 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:19.123 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:19.123 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:19.123 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:19.123 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:19.123 00:11:19.123 NVM Specific Namespace Data 00:11:19.123 =========================== 00:11:19.123 Logical Block Storage Tag Mask: 0 00:11:19.123 Protection Information Capabilities: 00:11:19.123 16b Guard Protection Information Storage Tag Support: No 00:11:19.123 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:11:19.123 Storage Tag Check Read Support: No 00:11:19.123 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.123 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.123 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.123 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.123 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.123 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.124 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.124 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.124 Namespace ID:3 00:11:19.124 Error Recovery Timeout: Unlimited 00:11:19.124 Command Set Identifier: NVM (00h) 00:11:19.124 Deallocate: Supported 00:11:19.124 Deallocated/Unwritten Error: Supported 00:11:19.124 Deallocated Read Value: All 0x00 00:11:19.124 Deallocate in Write Zeroes: Not Supported 00:11:19.124 Deallocated Guard Field: 0xFFFF 00:11:19.124 Flush: Supported 00:11:19.124 Reservation: Not Supported 00:11:19.124 Namespace Sharing Capabilities: Private 00:11:19.124 Size (in LBAs): 1048576 (4GiB) 00:11:19.124 Capacity (in LBAs): 1048576 (4GiB) 00:11:19.124 Utilization (in LBAs): 1048576 (4GiB) 00:11:19.124 Thin Provisioning: Not Supported 00:11:19.124 Per-NS Atomic Units: No 00:11:19.124 Maximum Single Source Range Length: 128 00:11:19.124 Maximum Copy Length: 128 00:11:19.124 Maximum Source Range Count: 128 00:11:19.124 NGUID/EUI64 Never Reused: No 00:11:19.124 Namespace Write Protected: No 00:11:19.124 Number of LBA Formats: 8 00:11:19.124 Current LBA Format: LBA Format #04 00:11:19.124 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:19.124 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:19.124 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:19.124 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:19.124 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:19.124 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:19.124 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:19.124 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:19.124 00:11:19.124 NVM Specific Namespace Data 00:11:19.124 =========================== 00:11:19.124 Logical Block Storage Tag Mask: 0 00:11:19.124 Protection Information Capabilities: 00:11:19.124 16b Guard Protection Information Storage Tag Support: No 00:11:19.124 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:19.124 Storage Tag Check Read Support: No 00:11:19.124 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.124 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.124 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.124 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.124 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.124 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.124 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.124 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.124 08:29:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:19.124 08:29:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:11:19.384 ===================================================== 00:11:19.384 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:19.384 ===================================================== 00:11:19.384 Controller Capabilities/Features 00:11:19.384 ================================ 00:11:19.384 Vendor ID: 1b36 00:11:19.384 Subsystem Vendor ID: 1af4 00:11:19.384 Serial Number: 12340 00:11:19.384 Model Number: QEMU NVMe Ctrl 00:11:19.384 Firmware Version: 8.0.0 00:11:19.384 Recommended Arb Burst: 6 00:11:19.384 IEEE OUI Identifier: 00 54 52 00:11:19.384 Multi-path I/O 00:11:19.385 May have multiple subsystem ports: No 00:11:19.385 May have multiple controllers: No 00:11:19.385 Associated with SR-IOV VF: No 00:11:19.385 Max Data Transfer Size: 524288 00:11:19.385 Max Number of Namespaces: 256 00:11:19.385 Max Number of I/O Queues: 64 00:11:19.385 NVMe Specification Version (VS): 1.4 00:11:19.385 NVMe Specification Version (Identify): 1.4 00:11:19.385 Maximum Queue Entries: 2048 00:11:19.385 Contiguous Queues Required: Yes 00:11:19.385 Arbitration Mechanisms Supported 00:11:19.385 Weighted Round Robin: Not Supported 00:11:19.385 Vendor Specific: Not Supported 00:11:19.385 Reset Timeout: 7500 ms 00:11:19.385 Doorbell Stride: 4 bytes 00:11:19.385 NVM Subsystem Reset: Not Supported 00:11:19.385 Command Sets Supported 00:11:19.385 NVM Command Set: Supported 00:11:19.385 Boot Partition: Not Supported 00:11:19.385 Memory Page Size Minimum: 4096 bytes 00:11:19.385 Memory Page Size Maximum: 65536 bytes 00:11:19.385 Persistent Memory Region: Not Supported 00:11:19.385 Optional Asynchronous Events Supported 00:11:19.385 Namespace Attribute Notices: Supported 00:11:19.385 Firmware Activation Notices: Not Supported 00:11:19.385 ANA Change Notices: Not Supported 00:11:19.385 PLE Aggregate Log Change Notices: Not Supported 00:11:19.385 LBA Status Info Alert Notices: Not Supported 00:11:19.385 EGE Aggregate Log Change Notices: Not Supported 00:11:19.385 Normal NVM Subsystem Shutdown event: Not Supported 00:11:19.385 Zone Descriptor Change Notices: Not Supported 00:11:19.385 Discovery Log Change Notices: Not Supported 00:11:19.385 Controller Attributes 00:11:19.385 128-bit Host Identifier: Not Supported 00:11:19.385 Non-Operational Permissive Mode: Not Supported 00:11:19.385 NVM Sets: Not Supported 00:11:19.385 Read Recovery Levels: Not Supported 00:11:19.385 Endurance Groups: Not Supported 00:11:19.385 Predictable Latency Mode: Not Supported 00:11:19.385 Traffic Based Keep ALive: Not Supported 00:11:19.385 Namespace Granularity: Not Supported 00:11:19.385 SQ Associations: Not Supported 00:11:19.385 UUID List: Not Supported 00:11:19.385 Multi-Domain Subsystem: Not Supported 00:11:19.385 Fixed Capacity Management: Not Supported 00:11:19.385 Variable Capacity Management: Not Supported 00:11:19.385 Delete Endurance Group: Not Supported 00:11:19.385 Delete NVM Set: Not Supported 00:11:19.385 Extended LBA Formats Supported: Supported 00:11:19.385 Flexible Data Placement Supported: Not Supported 00:11:19.385 00:11:19.385 Controller Memory Buffer Support 00:11:19.385 ================================ 00:11:19.385 Supported: No 00:11:19.385 00:11:19.385 Persistent Memory Region Support 00:11:19.385 
================================ 00:11:19.385 Supported: No 00:11:19.385 00:11:19.385 Admin Command Set Attributes 00:11:19.385 ============================ 00:11:19.385 Security Send/Receive: Not Supported 00:11:19.385 Format NVM: Supported 00:11:19.385 Firmware Activate/Download: Not Supported 00:11:19.385 Namespace Management: Supported 00:11:19.385 Device Self-Test: Not Supported 00:11:19.385 Directives: Supported 00:11:19.385 NVMe-MI: Not Supported 00:11:19.385 Virtualization Management: Not Supported 00:11:19.385 Doorbell Buffer Config: Supported 00:11:19.385 Get LBA Status Capability: Not Supported 00:11:19.385 Command & Feature Lockdown Capability: Not Supported 00:11:19.385 Abort Command Limit: 4 00:11:19.385 Async Event Request Limit: 4 00:11:19.385 Number of Firmware Slots: N/A 00:11:19.385 Firmware Slot 1 Read-Only: N/A 00:11:19.385 Firmware Activation Without Reset: N/A 00:11:19.385 Multiple Update Detection Support: N/A 00:11:19.385 Firmware Update Granularity: No Information Provided 00:11:19.385 Per-Namespace SMART Log: Yes 00:11:19.385 Asymmetric Namespace Access Log Page: Not Supported 00:11:19.385 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:19.385 Command Effects Log Page: Supported 00:11:19.385 Get Log Page Extended Data: Supported 00:11:19.385 Telemetry Log Pages: Not Supported 00:11:19.385 Persistent Event Log Pages: Not Supported 00:11:19.385 Supported Log Pages Log Page: May Support 00:11:19.385 Commands Supported & Effects Log Page: Not Supported 00:11:19.385 Feature Identifiers & Effects Log Page:May Support 00:11:19.385 NVMe-MI Commands & Effects Log Page: May Support 00:11:19.385 Data Area 4 for Telemetry Log: Not Supported 00:11:19.385 Error Log Page Entries Supported: 1 00:11:19.385 Keep Alive: Not Supported 00:11:19.385 00:11:19.385 NVM Command Set Attributes 00:11:19.385 ========================== 00:11:19.385 Submission Queue Entry Size 00:11:19.385 Max: 64 00:11:19.385 Min: 64 00:11:19.385 Completion Queue Entry Size 00:11:19.385 Max: 16 00:11:19.385 Min: 16 00:11:19.385 Number of Namespaces: 256 00:11:19.385 Compare Command: Supported 00:11:19.385 Write Uncorrectable Command: Not Supported 00:11:19.385 Dataset Management Command: Supported 00:11:19.385 Write Zeroes Command: Supported 00:11:19.385 Set Features Save Field: Supported 00:11:19.385 Reservations: Not Supported 00:11:19.385 Timestamp: Supported 00:11:19.385 Copy: Supported 00:11:19.385 Volatile Write Cache: Present 00:11:19.385 Atomic Write Unit (Normal): 1 00:11:19.385 Atomic Write Unit (PFail): 1 00:11:19.385 Atomic Compare & Write Unit: 1 00:11:19.385 Fused Compare & Write: Not Supported 00:11:19.385 Scatter-Gather List 00:11:19.385 SGL Command Set: Supported 00:11:19.385 SGL Keyed: Not Supported 00:11:19.385 SGL Bit Bucket Descriptor: Not Supported 00:11:19.385 SGL Metadata Pointer: Not Supported 00:11:19.385 Oversized SGL: Not Supported 00:11:19.385 SGL Metadata Address: Not Supported 00:11:19.385 SGL Offset: Not Supported 00:11:19.385 Transport SGL Data Block: Not Supported 00:11:19.385 Replay Protected Memory Block: Not Supported 00:11:19.385 00:11:19.385 Firmware Slot Information 00:11:19.385 ========================= 00:11:19.385 Active slot: 1 00:11:19.385 Slot 1 Firmware Revision: 1.0 00:11:19.385 00:11:19.385 00:11:19.385 Commands Supported and Effects 00:11:19.385 ============================== 00:11:19.385 Admin Commands 00:11:19.385 -------------- 00:11:19.385 Delete I/O Submission Queue (00h): Supported 00:11:19.385 Create I/O Submission Queue (01h): Supported 00:11:19.385 
Get Log Page (02h): Supported 00:11:19.385 Delete I/O Completion Queue (04h): Supported 00:11:19.385 Create I/O Completion Queue (05h): Supported 00:11:19.385 Identify (06h): Supported 00:11:19.385 Abort (08h): Supported 00:11:19.385 Set Features (09h): Supported 00:11:19.385 Get Features (0Ah): Supported 00:11:19.385 Asynchronous Event Request (0Ch): Supported 00:11:19.385 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:19.385 Directive Send (19h): Supported 00:11:19.385 Directive Receive (1Ah): Supported 00:11:19.385 Virtualization Management (1Ch): Supported 00:11:19.385 Doorbell Buffer Config (7Ch): Supported 00:11:19.385 Format NVM (80h): Supported LBA-Change 00:11:19.385 I/O Commands 00:11:19.385 ------------ 00:11:19.385 Flush (00h): Supported LBA-Change 00:11:19.385 Write (01h): Supported LBA-Change 00:11:19.385 Read (02h): Supported 00:11:19.385 Compare (05h): Supported 00:11:19.385 Write Zeroes (08h): Supported LBA-Change 00:11:19.385 Dataset Management (09h): Supported LBA-Change 00:11:19.385 Unknown (0Ch): Supported 00:11:19.385 Unknown (12h): Supported 00:11:19.385 Copy (19h): Supported LBA-Change 00:11:19.385 Unknown (1Dh): Supported LBA-Change 00:11:19.385 00:11:19.385 Error Log 00:11:19.385 ========= 00:11:19.385 00:11:19.385 Arbitration 00:11:19.385 =========== 00:11:19.385 Arbitration Burst: no limit 00:11:19.385 00:11:19.385 Power Management 00:11:19.385 ================ 00:11:19.385 Number of Power States: 1 00:11:19.385 Current Power State: Power State #0 00:11:19.385 Power State #0: 00:11:19.385 Max Power: 25.00 W 00:11:19.385 Non-Operational State: Operational 00:11:19.385 Entry Latency: 16 microseconds 00:11:19.385 Exit Latency: 4 microseconds 00:11:19.385 Relative Read Throughput: 0 00:11:19.385 Relative Read Latency: 0 00:11:19.385 Relative Write Throughput: 0 00:11:19.385 Relative Write Latency: 0 00:11:19.385 Idle Power: Not Reported 00:11:19.385 Active Power: Not Reported 00:11:19.385 Non-Operational Permissive Mode: Not Supported 00:11:19.385 00:11:19.385 Health Information 00:11:19.385 ================== 00:11:19.386 Critical Warnings: 00:11:19.386 Available Spare Space: OK 00:11:19.386 Temperature: OK 00:11:19.386 Device Reliability: OK 00:11:19.386 Read Only: No 00:11:19.386 Volatile Memory Backup: OK 00:11:19.386 Current Temperature: 323 Kelvin (50 Celsius) 00:11:19.386 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:19.386 Available Spare: 0% 00:11:19.386 Available Spare Threshold: 0% 00:11:19.386 Life Percentage Used: 0% 00:11:19.386 Data Units Read: 648 00:11:19.386 Data Units Written: 576 00:11:19.386 Host Read Commands: 33160 00:11:19.386 Host Write Commands: 32946 00:11:19.386 Controller Busy Time: 0 minutes 00:11:19.386 Power Cycles: 0 00:11:19.386 Power On Hours: 0 hours 00:11:19.386 Unsafe Shutdowns: 0 00:11:19.386 Unrecoverable Media Errors: 0 00:11:19.386 Lifetime Error Log Entries: 0 00:11:19.386 Warning Temperature Time: 0 minutes 00:11:19.386 Critical Temperature Time: 0 minutes 00:11:19.386 00:11:19.386 Number of Queues 00:11:19.386 ================ 00:11:19.386 Number of I/O Submission Queues: 64 00:11:19.386 Number of I/O Completion Queues: 64 00:11:19.386 00:11:19.386 ZNS Specific Controller Data 00:11:19.386 ============================ 00:11:19.386 Zone Append Size Limit: 0 00:11:19.386 00:11:19.386 00:11:19.386 Active Namespaces 00:11:19.386 ================= 00:11:19.386 Namespace ID:1 00:11:19.386 Error Recovery Timeout: Unlimited 00:11:19.386 Command Set Identifier: NVM (00h) 00:11:19.386 Deallocate: Supported 
00:11:19.386 Deallocated/Unwritten Error: Supported 00:11:19.386 Deallocated Read Value: All 0x00 00:11:19.386 Deallocate in Write Zeroes: Not Supported 00:11:19.386 Deallocated Guard Field: 0xFFFF 00:11:19.386 Flush: Supported 00:11:19.386 Reservation: Not Supported 00:11:19.386 Metadata Transferred as: Separate Metadata Buffer 00:11:19.386 Namespace Sharing Capabilities: Private 00:11:19.386 Size (in LBAs): 1548666 (5GiB) 00:11:19.386 Capacity (in LBAs): 1548666 (5GiB) 00:11:19.386 Utilization (in LBAs): 1548666 (5GiB) 00:11:19.386 Thin Provisioning: Not Supported 00:11:19.386 Per-NS Atomic Units: No 00:11:19.386 Maximum Single Source Range Length: 128 00:11:19.386 Maximum Copy Length: 128 00:11:19.386 Maximum Source Range Count: 128 00:11:19.386 NGUID/EUI64 Never Reused: No 00:11:19.386 Namespace Write Protected: No 00:11:19.386 Number of LBA Formats: 8 00:11:19.386 Current LBA Format: LBA Format #07 00:11:19.386 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:19.386 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:19.386 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:19.386 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:19.386 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:19.386 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:19.386 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:19.386 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:19.386 00:11:19.386 NVM Specific Namespace Data 00:11:19.386 =========================== 00:11:19.386 Logical Block Storage Tag Mask: 0 00:11:19.386 Protection Information Capabilities: 00:11:19.386 16b Guard Protection Information Storage Tag Support: No 00:11:19.386 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:19.386 Storage Tag Check Read Support: No 00:11:19.386 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.386 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.386 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.386 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.386 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.386 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.386 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.386 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.386 08:29:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:19.386 08:29:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:11:19.645 ===================================================== 00:11:19.645 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:19.645 ===================================================== 00:11:19.645 Controller Capabilities/Features 00:11:19.645 ================================ 00:11:19.645 Vendor ID: 1b36 00:11:19.645 Subsystem Vendor ID: 1af4 00:11:19.645 Serial Number: 12341 00:11:19.645 Model Number: QEMU NVMe Ctrl 00:11:19.645 Firmware Version: 8.0.0 00:11:19.645 Recommended Arb Burst: 6 00:11:19.645 IEEE OUI Identifier: 00 54 52 00:11:19.645 Multi-path I/O 00:11:19.645 May have multiple subsystem ports: No 00:11:19.645 May have multiple 
controllers: No 00:11:19.645 Associated with SR-IOV VF: No 00:11:19.645 Max Data Transfer Size: 524288 00:11:19.645 Max Number of Namespaces: 256 00:11:19.645 Max Number of I/O Queues: 64 00:11:19.645 NVMe Specification Version (VS): 1.4 00:11:19.645 NVMe Specification Version (Identify): 1.4 00:11:19.645 Maximum Queue Entries: 2048 00:11:19.645 Contiguous Queues Required: Yes 00:11:19.645 Arbitration Mechanisms Supported 00:11:19.645 Weighted Round Robin: Not Supported 00:11:19.645 Vendor Specific: Not Supported 00:11:19.645 Reset Timeout: 7500 ms 00:11:19.645 Doorbell Stride: 4 bytes 00:11:19.645 NVM Subsystem Reset: Not Supported 00:11:19.645 Command Sets Supported 00:11:19.645 NVM Command Set: Supported 00:11:19.645 Boot Partition: Not Supported 00:11:19.645 Memory Page Size Minimum: 4096 bytes 00:11:19.645 Memory Page Size Maximum: 65536 bytes 00:11:19.645 Persistent Memory Region: Not Supported 00:11:19.645 Optional Asynchronous Events Supported 00:11:19.645 Namespace Attribute Notices: Supported 00:11:19.645 Firmware Activation Notices: Not Supported 00:11:19.645 ANA Change Notices: Not Supported 00:11:19.645 PLE Aggregate Log Change Notices: Not Supported 00:11:19.645 LBA Status Info Alert Notices: Not Supported 00:11:19.645 EGE Aggregate Log Change Notices: Not Supported 00:11:19.645 Normal NVM Subsystem Shutdown event: Not Supported 00:11:19.645 Zone Descriptor Change Notices: Not Supported 00:11:19.645 Discovery Log Change Notices: Not Supported 00:11:19.645 Controller Attributes 00:11:19.645 128-bit Host Identifier: Not Supported 00:11:19.645 Non-Operational Permissive Mode: Not Supported 00:11:19.645 NVM Sets: Not Supported 00:11:19.645 Read Recovery Levels: Not Supported 00:11:19.645 Endurance Groups: Not Supported 00:11:19.645 Predictable Latency Mode: Not Supported 00:11:19.645 Traffic Based Keep ALive: Not Supported 00:11:19.645 Namespace Granularity: Not Supported 00:11:19.645 SQ Associations: Not Supported 00:11:19.645 UUID List: Not Supported 00:11:19.645 Multi-Domain Subsystem: Not Supported 00:11:19.645 Fixed Capacity Management: Not Supported 00:11:19.645 Variable Capacity Management: Not Supported 00:11:19.645 Delete Endurance Group: Not Supported 00:11:19.645 Delete NVM Set: Not Supported 00:11:19.645 Extended LBA Formats Supported: Supported 00:11:19.645 Flexible Data Placement Supported: Not Supported 00:11:19.645 00:11:19.645 Controller Memory Buffer Support 00:11:19.645 ================================ 00:11:19.645 Supported: No 00:11:19.645 00:11:19.645 Persistent Memory Region Support 00:11:19.645 ================================ 00:11:19.645 Supported: No 00:11:19.645 00:11:19.645 Admin Command Set Attributes 00:11:19.645 ============================ 00:11:19.645 Security Send/Receive: Not Supported 00:11:19.645 Format NVM: Supported 00:11:19.645 Firmware Activate/Download: Not Supported 00:11:19.645 Namespace Management: Supported 00:11:19.645 Device Self-Test: Not Supported 00:11:19.646 Directives: Supported 00:11:19.646 NVMe-MI: Not Supported 00:11:19.646 Virtualization Management: Not Supported 00:11:19.646 Doorbell Buffer Config: Supported 00:11:19.646 Get LBA Status Capability: Not Supported 00:11:19.646 Command & Feature Lockdown Capability: Not Supported 00:11:19.646 Abort Command Limit: 4 00:11:19.646 Async Event Request Limit: 4 00:11:19.646 Number of Firmware Slots: N/A 00:11:19.646 Firmware Slot 1 Read-Only: N/A 00:11:19.646 Firmware Activation Without Reset: N/A 00:11:19.646 Multiple Update Detection Support: N/A 00:11:19.646 Firmware Update 
Granularity: No Information Provided 00:11:19.646 Per-Namespace SMART Log: Yes 00:11:19.646 Asymmetric Namespace Access Log Page: Not Supported 00:11:19.646 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:19.646 Command Effects Log Page: Supported 00:11:19.646 Get Log Page Extended Data: Supported 00:11:19.646 Telemetry Log Pages: Not Supported 00:11:19.646 Persistent Event Log Pages: Not Supported 00:11:19.646 Supported Log Pages Log Page: May Support 00:11:19.646 Commands Supported & Effects Log Page: Not Supported 00:11:19.646 Feature Identifiers & Effects Log Page:May Support 00:11:19.646 NVMe-MI Commands & Effects Log Page: May Support 00:11:19.646 Data Area 4 for Telemetry Log: Not Supported 00:11:19.646 Error Log Page Entries Supported: 1 00:11:19.646 Keep Alive: Not Supported 00:11:19.646 00:11:19.646 NVM Command Set Attributes 00:11:19.646 ========================== 00:11:19.646 Submission Queue Entry Size 00:11:19.646 Max: 64 00:11:19.646 Min: 64 00:11:19.646 Completion Queue Entry Size 00:11:19.646 Max: 16 00:11:19.646 Min: 16 00:11:19.646 Number of Namespaces: 256 00:11:19.646 Compare Command: Supported 00:11:19.646 Write Uncorrectable Command: Not Supported 00:11:19.646 Dataset Management Command: Supported 00:11:19.646 Write Zeroes Command: Supported 00:11:19.646 Set Features Save Field: Supported 00:11:19.646 Reservations: Not Supported 00:11:19.646 Timestamp: Supported 00:11:19.646 Copy: Supported 00:11:19.646 Volatile Write Cache: Present 00:11:19.646 Atomic Write Unit (Normal): 1 00:11:19.646 Atomic Write Unit (PFail): 1 00:11:19.646 Atomic Compare & Write Unit: 1 00:11:19.646 Fused Compare & Write: Not Supported 00:11:19.646 Scatter-Gather List 00:11:19.646 SGL Command Set: Supported 00:11:19.646 SGL Keyed: Not Supported 00:11:19.646 SGL Bit Bucket Descriptor: Not Supported 00:11:19.646 SGL Metadata Pointer: Not Supported 00:11:19.646 Oversized SGL: Not Supported 00:11:19.646 SGL Metadata Address: Not Supported 00:11:19.646 SGL Offset: Not Supported 00:11:19.646 Transport SGL Data Block: Not Supported 00:11:19.646 Replay Protected Memory Block: Not Supported 00:11:19.646 00:11:19.646 Firmware Slot Information 00:11:19.646 ========================= 00:11:19.646 Active slot: 1 00:11:19.646 Slot 1 Firmware Revision: 1.0 00:11:19.646 00:11:19.646 00:11:19.646 Commands Supported and Effects 00:11:19.646 ============================== 00:11:19.646 Admin Commands 00:11:19.646 -------------- 00:11:19.646 Delete I/O Submission Queue (00h): Supported 00:11:19.646 Create I/O Submission Queue (01h): Supported 00:11:19.646 Get Log Page (02h): Supported 00:11:19.646 Delete I/O Completion Queue (04h): Supported 00:11:19.646 Create I/O Completion Queue (05h): Supported 00:11:19.646 Identify (06h): Supported 00:11:19.646 Abort (08h): Supported 00:11:19.646 Set Features (09h): Supported 00:11:19.646 Get Features (0Ah): Supported 00:11:19.646 Asynchronous Event Request (0Ch): Supported 00:11:19.646 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:19.646 Directive Send (19h): Supported 00:11:19.646 Directive Receive (1Ah): Supported 00:11:19.646 Virtualization Management (1Ch): Supported 00:11:19.646 Doorbell Buffer Config (7Ch): Supported 00:11:19.646 Format NVM (80h): Supported LBA-Change 00:11:19.646 I/O Commands 00:11:19.646 ------------ 00:11:19.646 Flush (00h): Supported LBA-Change 00:11:19.646 Write (01h): Supported LBA-Change 00:11:19.646 Read (02h): Supported 00:11:19.646 Compare (05h): Supported 00:11:19.646 Write Zeroes (08h): Supported LBA-Change 00:11:19.646 
Dataset Management (09h): Supported LBA-Change 00:11:19.646 Unknown (0Ch): Supported 00:11:19.646 Unknown (12h): Supported 00:11:19.646 Copy (19h): Supported LBA-Change 00:11:19.646 Unknown (1Dh): Supported LBA-Change 00:11:19.646 00:11:19.646 Error Log 00:11:19.646 ========= 00:11:19.646 00:11:19.646 Arbitration 00:11:19.646 =========== 00:11:19.646 Arbitration Burst: no limit 00:11:19.646 00:11:19.646 Power Management 00:11:19.646 ================ 00:11:19.646 Number of Power States: 1 00:11:19.646 Current Power State: Power State #0 00:11:19.646 Power State #0: 00:11:19.646 Max Power: 25.00 W 00:11:19.646 Non-Operational State: Operational 00:11:19.646 Entry Latency: 16 microseconds 00:11:19.646 Exit Latency: 4 microseconds 00:11:19.646 Relative Read Throughput: 0 00:11:19.646 Relative Read Latency: 0 00:11:19.646 Relative Write Throughput: 0 00:11:19.646 Relative Write Latency: 0 00:11:19.646 Idle Power: Not Reported 00:11:19.646 Active Power: Not Reported 00:11:19.646 Non-Operational Permissive Mode: Not Supported 00:11:19.646 00:11:19.646 Health Information 00:11:19.646 ================== 00:11:19.646 Critical Warnings: 00:11:19.646 Available Spare Space: OK 00:11:19.646 Temperature: OK 00:11:19.646 Device Reliability: OK 00:11:19.646 Read Only: No 00:11:19.646 Volatile Memory Backup: OK 00:11:19.646 Current Temperature: 323 Kelvin (50 Celsius) 00:11:19.646 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:19.646 Available Spare: 0% 00:11:19.646 Available Spare Threshold: 0% 00:11:19.646 Life Percentage Used: 0% 00:11:19.646 Data Units Read: 950 00:11:19.646 Data Units Written: 817 00:11:19.646 Host Read Commands: 48903 00:11:19.646 Host Write Commands: 47669 00:11:19.646 Controller Busy Time: 0 minutes 00:11:19.646 Power Cycles: 0 00:11:19.646 Power On Hours: 0 hours 00:11:19.646 Unsafe Shutdowns: 0 00:11:19.646 Unrecoverable Media Errors: 0 00:11:19.646 Lifetime Error Log Entries: 0 00:11:19.646 Warning Temperature Time: 0 minutes 00:11:19.646 Critical Temperature Time: 0 minutes 00:11:19.646 00:11:19.646 Number of Queues 00:11:19.646 ================ 00:11:19.646 Number of I/O Submission Queues: 64 00:11:19.646 Number of I/O Completion Queues: 64 00:11:19.646 00:11:19.646 ZNS Specific Controller Data 00:11:19.646 ============================ 00:11:19.646 Zone Append Size Limit: 0 00:11:19.646 00:11:19.646 00:11:19.646 Active Namespaces 00:11:19.646 ================= 00:11:19.646 Namespace ID:1 00:11:19.646 Error Recovery Timeout: Unlimited 00:11:19.646 Command Set Identifier: NVM (00h) 00:11:19.646 Deallocate: Supported 00:11:19.646 Deallocated/Unwritten Error: Supported 00:11:19.646 Deallocated Read Value: All 0x00 00:11:19.646 Deallocate in Write Zeroes: Not Supported 00:11:19.646 Deallocated Guard Field: 0xFFFF 00:11:19.646 Flush: Supported 00:11:19.646 Reservation: Not Supported 00:11:19.646 Namespace Sharing Capabilities: Private 00:11:19.646 Size (in LBAs): 1310720 (5GiB) 00:11:19.646 Capacity (in LBAs): 1310720 (5GiB) 00:11:19.646 Utilization (in LBAs): 1310720 (5GiB) 00:11:19.646 Thin Provisioning: Not Supported 00:11:19.646 Per-NS Atomic Units: No 00:11:19.646 Maximum Single Source Range Length: 128 00:11:19.646 Maximum Copy Length: 128 00:11:19.646 Maximum Source Range Count: 128 00:11:19.646 NGUID/EUI64 Never Reused: No 00:11:19.646 Namespace Write Protected: No 00:11:19.646 Number of LBA Formats: 8 00:11:19.646 Current LBA Format: LBA Format #04 00:11:19.646 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:19.646 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:11:19.646 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:19.646 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:19.646 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:19.646 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:19.646 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:19.646 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:19.646 00:11:19.646 NVM Specific Namespace Data 00:11:19.646 =========================== 00:11:19.646 Logical Block Storage Tag Mask: 0 00:11:19.646 Protection Information Capabilities: 00:11:19.646 16b Guard Protection Information Storage Tag Support: No 00:11:19.646 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:19.646 Storage Tag Check Read Support: No 00:11:19.646 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.646 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.646 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.647 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.647 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.647 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.647 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.647 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:19.905 08:29:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:19.905 08:29:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:11:20.166 ===================================================== 00:11:20.166 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:20.166 ===================================================== 00:11:20.166 Controller Capabilities/Features 00:11:20.166 ================================ 00:11:20.166 Vendor ID: 1b36 00:11:20.166 Subsystem Vendor ID: 1af4 00:11:20.166 Serial Number: 12342 00:11:20.166 Model Number: QEMU NVMe Ctrl 00:11:20.166 Firmware Version: 8.0.0 00:11:20.166 Recommended Arb Burst: 6 00:11:20.166 IEEE OUI Identifier: 00 54 52 00:11:20.166 Multi-path I/O 00:11:20.166 May have multiple subsystem ports: No 00:11:20.166 May have multiple controllers: No 00:11:20.166 Associated with SR-IOV VF: No 00:11:20.166 Max Data Transfer Size: 524288 00:11:20.166 Max Number of Namespaces: 256 00:11:20.166 Max Number of I/O Queues: 64 00:11:20.166 NVMe Specification Version (VS): 1.4 00:11:20.166 NVMe Specification Version (Identify): 1.4 00:11:20.166 Maximum Queue Entries: 2048 00:11:20.166 Contiguous Queues Required: Yes 00:11:20.166 Arbitration Mechanisms Supported 00:11:20.166 Weighted Round Robin: Not Supported 00:11:20.166 Vendor Specific: Not Supported 00:11:20.166 Reset Timeout: 7500 ms 00:11:20.166 Doorbell Stride: 4 bytes 00:11:20.166 NVM Subsystem Reset: Not Supported 00:11:20.166 Command Sets Supported 00:11:20.166 NVM Command Set: Supported 00:11:20.166 Boot Partition: Not Supported 00:11:20.166 Memory Page Size Minimum: 4096 bytes 00:11:20.166 Memory Page Size Maximum: 65536 bytes 00:11:20.166 Persistent Memory Region: Not Supported 00:11:20.166 Optional Asynchronous Events Supported 00:11:20.166 Namespace Attribute Notices: Supported 00:11:20.166 Firmware 
Activation Notices: Not Supported 00:11:20.166 ANA Change Notices: Not Supported 00:11:20.166 PLE Aggregate Log Change Notices: Not Supported 00:11:20.166 LBA Status Info Alert Notices: Not Supported 00:11:20.166 EGE Aggregate Log Change Notices: Not Supported 00:11:20.166 Normal NVM Subsystem Shutdown event: Not Supported 00:11:20.166 Zone Descriptor Change Notices: Not Supported 00:11:20.166 Discovery Log Change Notices: Not Supported 00:11:20.166 Controller Attributes 00:11:20.166 128-bit Host Identifier: Not Supported 00:11:20.166 Non-Operational Permissive Mode: Not Supported 00:11:20.166 NVM Sets: Not Supported 00:11:20.166 Read Recovery Levels: Not Supported 00:11:20.166 Endurance Groups: Not Supported 00:11:20.166 Predictable Latency Mode: Not Supported 00:11:20.166 Traffic Based Keep ALive: Not Supported 00:11:20.166 Namespace Granularity: Not Supported 00:11:20.166 SQ Associations: Not Supported 00:11:20.166 UUID List: Not Supported 00:11:20.166 Multi-Domain Subsystem: Not Supported 00:11:20.166 Fixed Capacity Management: Not Supported 00:11:20.166 Variable Capacity Management: Not Supported 00:11:20.166 Delete Endurance Group: Not Supported 00:11:20.166 Delete NVM Set: Not Supported 00:11:20.166 Extended LBA Formats Supported: Supported 00:11:20.166 Flexible Data Placement Supported: Not Supported 00:11:20.166 00:11:20.166 Controller Memory Buffer Support 00:11:20.166 ================================ 00:11:20.166 Supported: No 00:11:20.166 00:11:20.166 Persistent Memory Region Support 00:11:20.166 ================================ 00:11:20.166 Supported: No 00:11:20.166 00:11:20.166 Admin Command Set Attributes 00:11:20.166 ============================ 00:11:20.166 Security Send/Receive: Not Supported 00:11:20.166 Format NVM: Supported 00:11:20.166 Firmware Activate/Download: Not Supported 00:11:20.166 Namespace Management: Supported 00:11:20.166 Device Self-Test: Not Supported 00:11:20.166 Directives: Supported 00:11:20.166 NVMe-MI: Not Supported 00:11:20.166 Virtualization Management: Not Supported 00:11:20.166 Doorbell Buffer Config: Supported 00:11:20.166 Get LBA Status Capability: Not Supported 00:11:20.166 Command & Feature Lockdown Capability: Not Supported 00:11:20.166 Abort Command Limit: 4 00:11:20.166 Async Event Request Limit: 4 00:11:20.166 Number of Firmware Slots: N/A 00:11:20.166 Firmware Slot 1 Read-Only: N/A 00:11:20.166 Firmware Activation Without Reset: N/A 00:11:20.166 Multiple Update Detection Support: N/A 00:11:20.166 Firmware Update Granularity: No Information Provided 00:11:20.166 Per-Namespace SMART Log: Yes 00:11:20.166 Asymmetric Namespace Access Log Page: Not Supported 00:11:20.166 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:20.166 Command Effects Log Page: Supported 00:11:20.166 Get Log Page Extended Data: Supported 00:11:20.166 Telemetry Log Pages: Not Supported 00:11:20.166 Persistent Event Log Pages: Not Supported 00:11:20.166 Supported Log Pages Log Page: May Support 00:11:20.166 Commands Supported & Effects Log Page: Not Supported 00:11:20.166 Feature Identifiers & Effects Log Page:May Support 00:11:20.166 NVMe-MI Commands & Effects Log Page: May Support 00:11:20.166 Data Area 4 for Telemetry Log: Not Supported 00:11:20.166 Error Log Page Entries Supported: 1 00:11:20.166 Keep Alive: Not Supported 00:11:20.166 00:11:20.166 NVM Command Set Attributes 00:11:20.166 ========================== 00:11:20.166 Submission Queue Entry Size 00:11:20.166 Max: 64 00:11:20.166 Min: 64 00:11:20.166 Completion Queue Entry Size 00:11:20.166 Max: 16 
00:11:20.166 Min: 16 00:11:20.166 Number of Namespaces: 256 00:11:20.166 Compare Command: Supported 00:11:20.166 Write Uncorrectable Command: Not Supported 00:11:20.166 Dataset Management Command: Supported 00:11:20.166 Write Zeroes Command: Supported 00:11:20.166 Set Features Save Field: Supported 00:11:20.166 Reservations: Not Supported 00:11:20.166 Timestamp: Supported 00:11:20.166 Copy: Supported 00:11:20.166 Volatile Write Cache: Present 00:11:20.166 Atomic Write Unit (Normal): 1 00:11:20.166 Atomic Write Unit (PFail): 1 00:11:20.166 Atomic Compare & Write Unit: 1 00:11:20.166 Fused Compare & Write: Not Supported 00:11:20.166 Scatter-Gather List 00:11:20.166 SGL Command Set: Supported 00:11:20.166 SGL Keyed: Not Supported 00:11:20.166 SGL Bit Bucket Descriptor: Not Supported 00:11:20.166 SGL Metadata Pointer: Not Supported 00:11:20.166 Oversized SGL: Not Supported 00:11:20.166 SGL Metadata Address: Not Supported 00:11:20.166 SGL Offset: Not Supported 00:11:20.166 Transport SGL Data Block: Not Supported 00:11:20.166 Replay Protected Memory Block: Not Supported 00:11:20.166 00:11:20.166 Firmware Slot Information 00:11:20.166 ========================= 00:11:20.166 Active slot: 1 00:11:20.166 Slot 1 Firmware Revision: 1.0 00:11:20.166 00:11:20.166 00:11:20.166 Commands Supported and Effects 00:11:20.166 ============================== 00:11:20.166 Admin Commands 00:11:20.166 -------------- 00:11:20.166 Delete I/O Submission Queue (00h): Supported 00:11:20.166 Create I/O Submission Queue (01h): Supported 00:11:20.166 Get Log Page (02h): Supported 00:11:20.166 Delete I/O Completion Queue (04h): Supported 00:11:20.166 Create I/O Completion Queue (05h): Supported 00:11:20.166 Identify (06h): Supported 00:11:20.166 Abort (08h): Supported 00:11:20.166 Set Features (09h): Supported 00:11:20.166 Get Features (0Ah): Supported 00:11:20.166 Asynchronous Event Request (0Ch): Supported 00:11:20.166 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:20.166 Directive Send (19h): Supported 00:11:20.167 Directive Receive (1Ah): Supported 00:11:20.167 Virtualization Management (1Ch): Supported 00:11:20.167 Doorbell Buffer Config (7Ch): Supported 00:11:20.167 Format NVM (80h): Supported LBA-Change 00:11:20.167 I/O Commands 00:11:20.167 ------------ 00:11:20.167 Flush (00h): Supported LBA-Change 00:11:20.167 Write (01h): Supported LBA-Change 00:11:20.167 Read (02h): Supported 00:11:20.167 Compare (05h): Supported 00:11:20.167 Write Zeroes (08h): Supported LBA-Change 00:11:20.167 Dataset Management (09h): Supported LBA-Change 00:11:20.167 Unknown (0Ch): Supported 00:11:20.167 Unknown (12h): Supported 00:11:20.167 Copy (19h): Supported LBA-Change 00:11:20.167 Unknown (1Dh): Supported LBA-Change 00:11:20.167 00:11:20.167 Error Log 00:11:20.167 ========= 00:11:20.167 00:11:20.167 Arbitration 00:11:20.167 =========== 00:11:20.167 Arbitration Burst: no limit 00:11:20.167 00:11:20.167 Power Management 00:11:20.167 ================ 00:11:20.167 Number of Power States: 1 00:11:20.167 Current Power State: Power State #0 00:11:20.167 Power State #0: 00:11:20.167 Max Power: 25.00 W 00:11:20.167 Non-Operational State: Operational 00:11:20.167 Entry Latency: 16 microseconds 00:11:20.167 Exit Latency: 4 microseconds 00:11:20.167 Relative Read Throughput: 0 00:11:20.167 Relative Read Latency: 0 00:11:20.167 Relative Write Throughput: 0 00:11:20.167 Relative Write Latency: 0 00:11:20.167 Idle Power: Not Reported 00:11:20.167 Active Power: Not Reported 00:11:20.167 Non-Operational Permissive Mode: Not Supported 
00:11:20.167 00:11:20.167 Health Information 00:11:20.167 ================== 00:11:20.167 Critical Warnings: 00:11:20.167 Available Spare Space: OK 00:11:20.167 Temperature: OK 00:11:20.167 Device Reliability: OK 00:11:20.167 Read Only: No 00:11:20.167 Volatile Memory Backup: OK 00:11:20.167 Current Temperature: 323 Kelvin (50 Celsius) 00:11:20.167 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:20.167 Available Spare: 0% 00:11:20.167 Available Spare Threshold: 0% 00:11:20.167 Life Percentage Used: 0% 00:11:20.167 Data Units Read: 2045 00:11:20.167 Data Units Written: 1832 00:11:20.167 Host Read Commands: 101053 00:11:20.167 Host Write Commands: 99322 00:11:20.167 Controller Busy Time: 0 minutes 00:11:20.167 Power Cycles: 0 00:11:20.167 Power On Hours: 0 hours 00:11:20.167 Unsafe Shutdowns: 0 00:11:20.167 Unrecoverable Media Errors: 0 00:11:20.167 Lifetime Error Log Entries: 0 00:11:20.167 Warning Temperature Time: 0 minutes 00:11:20.167 Critical Temperature Time: 0 minutes 00:11:20.167 00:11:20.167 Number of Queues 00:11:20.167 ================ 00:11:20.167 Number of I/O Submission Queues: 64 00:11:20.167 Number of I/O Completion Queues: 64 00:11:20.167 00:11:20.167 ZNS Specific Controller Data 00:11:20.167 ============================ 00:11:20.167 Zone Append Size Limit: 0 00:11:20.167 00:11:20.167 00:11:20.167 Active Namespaces 00:11:20.167 ================= 00:11:20.167 Namespace ID:1 00:11:20.167 Error Recovery Timeout: Unlimited 00:11:20.167 Command Set Identifier: NVM (00h) 00:11:20.167 Deallocate: Supported 00:11:20.167 Deallocated/Unwritten Error: Supported 00:11:20.167 Deallocated Read Value: All 0x00 00:11:20.167 Deallocate in Write Zeroes: Not Supported 00:11:20.167 Deallocated Guard Field: 0xFFFF 00:11:20.167 Flush: Supported 00:11:20.167 Reservation: Not Supported 00:11:20.167 Namespace Sharing Capabilities: Private 00:11:20.167 Size (in LBAs): 1048576 (4GiB) 00:11:20.167 Capacity (in LBAs): 1048576 (4GiB) 00:11:20.167 Utilization (in LBAs): 1048576 (4GiB) 00:11:20.167 Thin Provisioning: Not Supported 00:11:20.167 Per-NS Atomic Units: No 00:11:20.167 Maximum Single Source Range Length: 128 00:11:20.167 Maximum Copy Length: 128 00:11:20.167 Maximum Source Range Count: 128 00:11:20.167 NGUID/EUI64 Never Reused: No 00:11:20.167 Namespace Write Protected: No 00:11:20.167 Number of LBA Formats: 8 00:11:20.167 Current LBA Format: LBA Format #04 00:11:20.167 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:20.167 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:20.167 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:20.167 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:20.167 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:20.167 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:20.167 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:20.167 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:20.167 00:11:20.167 NVM Specific Namespace Data 00:11:20.167 =========================== 00:11:20.167 Logical Block Storage Tag Mask: 0 00:11:20.167 Protection Information Capabilities: 00:11:20.167 16b Guard Protection Information Storage Tag Support: No 00:11:20.167 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:20.167 Storage Tag Check Read Support: No 00:11:20.167 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.167 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.167 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.167 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.167 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.167 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.167 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.167 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.167 Namespace ID:2 00:11:20.167 Error Recovery Timeout: Unlimited 00:11:20.167 Command Set Identifier: NVM (00h) 00:11:20.167 Deallocate: Supported 00:11:20.167 Deallocated/Unwritten Error: Supported 00:11:20.167 Deallocated Read Value: All 0x00 00:11:20.167 Deallocate in Write Zeroes: Not Supported 00:11:20.167 Deallocated Guard Field: 0xFFFF 00:11:20.167 Flush: Supported 00:11:20.167 Reservation: Not Supported 00:11:20.167 Namespace Sharing Capabilities: Private 00:11:20.167 Size (in LBAs): 1048576 (4GiB) 00:11:20.167 Capacity (in LBAs): 1048576 (4GiB) 00:11:20.167 Utilization (in LBAs): 1048576 (4GiB) 00:11:20.167 Thin Provisioning: Not Supported 00:11:20.167 Per-NS Atomic Units: No 00:11:20.167 Maximum Single Source Range Length: 128 00:11:20.167 Maximum Copy Length: 128 00:11:20.167 Maximum Source Range Count: 128 00:11:20.167 NGUID/EUI64 Never Reused: No 00:11:20.167 Namespace Write Protected: No 00:11:20.167 Number of LBA Formats: 8 00:11:20.167 Current LBA Format: LBA Format #04 00:11:20.167 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:20.167 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:20.167 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:20.167 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:20.167 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:20.167 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:20.167 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:20.167 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:20.167 00:11:20.167 NVM Specific Namespace Data 00:11:20.167 =========================== 00:11:20.167 Logical Block Storage Tag Mask: 0 00:11:20.167 Protection Information Capabilities: 00:11:20.167 16b Guard Protection Information Storage Tag Support: No 00:11:20.167 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:20.167 Storage Tag Check Read Support: No 00:11:20.167 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.167 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.167 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.167 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.167 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.167 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.167 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.167 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.167 Namespace ID:3 00:11:20.167 Error Recovery Timeout: Unlimited 00:11:20.167 Command Set Identifier: NVM (00h) 00:11:20.167 Deallocate: Supported 00:11:20.167 Deallocated/Unwritten Error: Supported 00:11:20.167 Deallocated Read 
Value: All 0x00 00:11:20.167 Deallocate in Write Zeroes: Not Supported 00:11:20.167 Deallocated Guard Field: 0xFFFF 00:11:20.167 Flush: Supported 00:11:20.167 Reservation: Not Supported 00:11:20.167 Namespace Sharing Capabilities: Private 00:11:20.167 Size (in LBAs): 1048576 (4GiB) 00:11:20.167 Capacity (in LBAs): 1048576 (4GiB) 00:11:20.167 Utilization (in LBAs): 1048576 (4GiB) 00:11:20.167 Thin Provisioning: Not Supported 00:11:20.167 Per-NS Atomic Units: No 00:11:20.168 Maximum Single Source Range Length: 128 00:11:20.168 Maximum Copy Length: 128 00:11:20.168 Maximum Source Range Count: 128 00:11:20.168 NGUID/EUI64 Never Reused: No 00:11:20.168 Namespace Write Protected: No 00:11:20.168 Number of LBA Formats: 8 00:11:20.168 Current LBA Format: LBA Format #04 00:11:20.168 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:20.168 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:20.168 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:20.168 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:20.168 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:20.168 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:20.168 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:20.168 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:20.168 00:11:20.168 NVM Specific Namespace Data 00:11:20.168 =========================== 00:11:20.168 Logical Block Storage Tag Mask: 0 00:11:20.168 Protection Information Capabilities: 00:11:20.168 16b Guard Protection Information Storage Tag Support: No 00:11:20.168 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:20.168 Storage Tag Check Read Support: No 00:11:20.168 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.168 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.168 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.168 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.168 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.168 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.168 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.168 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.168 08:29:59 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:20.168 08:29:59 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:11:20.427 ===================================================== 00:11:20.427 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:20.427 ===================================================== 00:11:20.427 Controller Capabilities/Features 00:11:20.427 ================================ 00:11:20.427 Vendor ID: 1b36 00:11:20.427 Subsystem Vendor ID: 1af4 00:11:20.427 Serial Number: 12343 00:11:20.427 Model Number: QEMU NVMe Ctrl 00:11:20.427 Firmware Version: 8.0.0 00:11:20.427 Recommended Arb Burst: 6 00:11:20.427 IEEE OUI Identifier: 00 54 52 00:11:20.427 Multi-path I/O 00:11:20.427 May have multiple subsystem ports: No 00:11:20.427 May have multiple controllers: Yes 00:11:20.427 Associated with SR-IOV VF: No 00:11:20.427 Max Data Transfer Size: 524288 00:11:20.427 Max Number of Namespaces: 
256 00:11:20.427 Max Number of I/O Queues: 64 00:11:20.427 NVMe Specification Version (VS): 1.4 00:11:20.427 NVMe Specification Version (Identify): 1.4 00:11:20.427 Maximum Queue Entries: 2048 00:11:20.427 Contiguous Queues Required: Yes 00:11:20.427 Arbitration Mechanisms Supported 00:11:20.427 Weighted Round Robin: Not Supported 00:11:20.427 Vendor Specific: Not Supported 00:11:20.427 Reset Timeout: 7500 ms 00:11:20.427 Doorbell Stride: 4 bytes 00:11:20.427 NVM Subsystem Reset: Not Supported 00:11:20.427 Command Sets Supported 00:11:20.427 NVM Command Set: Supported 00:11:20.427 Boot Partition: Not Supported 00:11:20.427 Memory Page Size Minimum: 4096 bytes 00:11:20.427 Memory Page Size Maximum: 65536 bytes 00:11:20.427 Persistent Memory Region: Not Supported 00:11:20.427 Optional Asynchronous Events Supported 00:11:20.427 Namespace Attribute Notices: Supported 00:11:20.427 Firmware Activation Notices: Not Supported 00:11:20.427 ANA Change Notices: Not Supported 00:11:20.427 PLE Aggregate Log Change Notices: Not Supported 00:11:20.427 LBA Status Info Alert Notices: Not Supported 00:11:20.427 EGE Aggregate Log Change Notices: Not Supported 00:11:20.427 Normal NVM Subsystem Shutdown event: Not Supported 00:11:20.427 Zone Descriptor Change Notices: Not Supported 00:11:20.427 Discovery Log Change Notices: Not Supported 00:11:20.427 Controller Attributes 00:11:20.427 128-bit Host Identifier: Not Supported 00:11:20.427 Non-Operational Permissive Mode: Not Supported 00:11:20.427 NVM Sets: Not Supported 00:11:20.427 Read Recovery Levels: Not Supported 00:11:20.427 Endurance Groups: Supported 00:11:20.427 Predictable Latency Mode: Not Supported 00:11:20.427 Traffic Based Keep Alive: Not Supported 00:11:20.427 Namespace Granularity: Not Supported 00:11:20.427 SQ Associations: Not Supported 00:11:20.427 UUID List: Not Supported 00:11:20.427 Multi-Domain Subsystem: Not Supported 00:11:20.427 Fixed Capacity Management: Not Supported 00:11:20.427 Variable Capacity Management: Not Supported 00:11:20.427 Delete Endurance Group: Not Supported 00:11:20.427 Delete NVM Set: Not Supported 00:11:20.427 Extended LBA Formats Supported: Supported 00:11:20.427 Flexible Data Placement Supported: Supported 00:11:20.427 00:11:20.427 Controller Memory Buffer Support 00:11:20.427 ================================ 00:11:20.427 Supported: No 00:11:20.427 00:11:20.427 Persistent Memory Region Support 00:11:20.427 ================================ 00:11:20.427 Supported: No 00:11:20.427 00:11:20.427 Admin Command Set Attributes 00:11:20.427 ============================ 00:11:20.427 Security Send/Receive: Not Supported 00:11:20.427 Format NVM: Supported 00:11:20.427 Firmware Activate/Download: Not Supported 00:11:20.427 Namespace Management: Supported 00:11:20.427 Device Self-Test: Not Supported 00:11:20.427 Directives: Supported 00:11:20.427 NVMe-MI: Not Supported 00:11:20.427 Virtualization Management: Not Supported 00:11:20.427 Doorbell Buffer Config: Supported 00:11:20.427 Get LBA Status Capability: Not Supported 00:11:20.427 Command & Feature Lockdown Capability: Not Supported 00:11:20.427 Abort Command Limit: 4 00:11:20.427 Async Event Request Limit: 4 00:11:20.427 Number of Firmware Slots: N/A 00:11:20.427 Firmware Slot 1 Read-Only: N/A 00:11:20.427 Firmware Activation Without Reset: N/A 00:11:20.427 Multiple Update Detection Support: N/A 00:11:20.427 Firmware Update Granularity: No Information Provided 00:11:20.427 Per-Namespace SMART Log: Yes 00:11:20.427 Asymmetric Namespace Access Log Page: Not Supported
00:11:20.427 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:20.427 Command Effects Log Page: Supported 00:11:20.427 Get Log Page Extended Data: Supported 00:11:20.427 Telemetry Log Pages: Not Supported 00:11:20.427 Persistent Event Log Pages: Not Supported 00:11:20.427 Supported Log Pages Log Page: May Support 00:11:20.427 Commands Supported & Effects Log Page: Not Supported 00:11:20.427 Feature Identifiers & Effects Log Page: May Support 00:11:20.427 NVMe-MI Commands & Effects Log Page: May Support 00:11:20.427 Data Area 4 for Telemetry Log: Not Supported 00:11:20.427 Error Log Page Entries Supported: 1 00:11:20.427 Keep Alive: Not Supported 00:11:20.427 00:11:20.427 NVM Command Set Attributes 00:11:20.427 ========================== 00:11:20.427 Submission Queue Entry Size 00:11:20.427 Max: 64 00:11:20.427 Min: 64 00:11:20.427 Completion Queue Entry Size 00:11:20.427 Max: 16 00:11:20.427 Min: 16 00:11:20.427 Number of Namespaces: 256 00:11:20.427 Compare Command: Supported 00:11:20.427 Write Uncorrectable Command: Not Supported 00:11:20.427 Dataset Management Command: Supported 00:11:20.427 Write Zeroes Command: Supported 00:11:20.427 Set Features Save Field: Supported 00:11:20.427 Reservations: Not Supported 00:11:20.427 Timestamp: Supported 00:11:20.427 Copy: Supported 00:11:20.427 Volatile Write Cache: Present 00:11:20.427 Atomic Write Unit (Normal): 1 00:11:20.427 Atomic Write Unit (PFail): 1 00:11:20.427 Atomic Compare & Write Unit: 1 00:11:20.427 Fused Compare & Write: Not Supported 00:11:20.427 Scatter-Gather List 00:11:20.427 SGL Command Set: Supported 00:11:20.427 SGL Keyed: Not Supported 00:11:20.427 SGL Bit Bucket Descriptor: Not Supported 00:11:20.427 SGL Metadata Pointer: Not Supported 00:11:20.427 Oversized SGL: Not Supported 00:11:20.427 SGL Metadata Address: Not Supported 00:11:20.427 SGL Offset: Not Supported 00:11:20.427 Transport SGL Data Block: Not Supported 00:11:20.428 Replay Protected Memory Block: Not Supported 00:11:20.428 00:11:20.428 Firmware Slot Information 00:11:20.428 ========================= 00:11:20.428 Active slot: 1 00:11:20.428 Slot 1 Firmware Revision: 1.0 00:11:20.428 00:11:20.428 00:11:20.428 Commands Supported and Effects 00:11:20.428 ============================== 00:11:20.428 Admin Commands 00:11:20.428 -------------- 00:11:20.428 Delete I/O Submission Queue (00h): Supported 00:11:20.428 Create I/O Submission Queue (01h): Supported 00:11:20.428 Get Log Page (02h): Supported 00:11:20.428 Delete I/O Completion Queue (04h): Supported 00:11:20.428 Create I/O Completion Queue (05h): Supported 00:11:20.428 Identify (06h): Supported 00:11:20.428 Abort (08h): Supported 00:11:20.428 Set Features (09h): Supported 00:11:20.428 Get Features (0Ah): Supported 00:11:20.428 Asynchronous Event Request (0Ch): Supported 00:11:20.428 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:20.428 Directive Send (19h): Supported 00:11:20.428 Directive Receive (1Ah): Supported 00:11:20.428 Virtualization Management (1Ch): Supported 00:11:20.428 Doorbell Buffer Config (7Ch): Supported 00:11:20.428 Format NVM (80h): Supported LBA-Change 00:11:20.428 I/O Commands 00:11:20.428 ------------ 00:11:20.428 Flush (00h): Supported LBA-Change 00:11:20.428 Write (01h): Supported LBA-Change 00:11:20.428 Read (02h): Supported 00:11:20.428 Compare (05h): Supported 00:11:20.428 Write Zeroes (08h): Supported LBA-Change 00:11:20.428 Dataset Management (09h): Supported LBA-Change 00:11:20.428 Unknown (0Ch): Supported 00:11:20.428 Unknown (12h): Supported 00:11:20.428 Copy
(19h): Supported LBA-Change 00:11:20.428 Unknown (1Dh): Supported LBA-Change 00:11:20.428 00:11:20.428 Error Log 00:11:20.428 ========= 00:11:20.428 00:11:20.428 Arbitration 00:11:20.428 =========== 00:11:20.428 Arbitration Burst: no limit 00:11:20.428 00:11:20.428 Power Management 00:11:20.428 ================ 00:11:20.428 Number of Power States: 1 00:11:20.428 Current Power State: Power State #0 00:11:20.428 Power State #0: 00:11:20.428 Max Power: 25.00 W 00:11:20.428 Non-Operational State: Operational 00:11:20.428 Entry Latency: 16 microseconds 00:11:20.428 Exit Latency: 4 microseconds 00:11:20.428 Relative Read Throughput: 0 00:11:20.428 Relative Read Latency: 0 00:11:20.428 Relative Write Throughput: 0 00:11:20.428 Relative Write Latency: 0 00:11:20.428 Idle Power: Not Reported 00:11:20.428 Active Power: Not Reported 00:11:20.428 Non-Operational Permissive Mode: Not Supported 00:11:20.428 00:11:20.428 Health Information 00:11:20.428 ================== 00:11:20.428 Critical Warnings: 00:11:20.428 Available Spare Space: OK 00:11:20.428 Temperature: OK 00:11:20.428 Device Reliability: OK 00:11:20.428 Read Only: No 00:11:20.428 Volatile Memory Backup: OK 00:11:20.428 Current Temperature: 323 Kelvin (50 Celsius) 00:11:20.428 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:20.428 Available Spare: 0% 00:11:20.428 Available Spare Threshold: 0% 00:11:20.428 Life Percentage Used: 0% 00:11:20.428 Data Units Read: 769 00:11:20.428 Data Units Written: 698 00:11:20.428 Host Read Commands: 34468 00:11:20.428 Host Write Commands: 33891 00:11:20.428 Controller Busy Time: 0 minutes 00:11:20.428 Power Cycles: 0 00:11:20.428 Power On Hours: 0 hours 00:11:20.428 Unsafe Shutdowns: 0 00:11:20.428 Unrecoverable Media Errors: 0 00:11:20.428 Lifetime Error Log Entries: 0 00:11:20.428 Warning Temperature Time: 0 minutes 00:11:20.428 Critical Temperature Time: 0 minutes 00:11:20.428 00:11:20.428 Number of Queues 00:11:20.428 ================ 00:11:20.428 Number of I/O Submission Queues: 64 00:11:20.428 Number of I/O Completion Queues: 64 00:11:20.428 00:11:20.428 ZNS Specific Controller Data 00:11:20.428 ============================ 00:11:20.428 Zone Append Size Limit: 0 00:11:20.428 00:11:20.428 00:11:20.428 Active Namespaces 00:11:20.428 ================= 00:11:20.428 Namespace ID:1 00:11:20.428 Error Recovery Timeout: Unlimited 00:11:20.428 Command Set Identifier: NVM (00h) 00:11:20.428 Deallocate: Supported 00:11:20.428 Deallocated/Unwritten Error: Supported 00:11:20.428 Deallocated Read Value: All 0x00 00:11:20.428 Deallocate in Write Zeroes: Not Supported 00:11:20.428 Deallocated Guard Field: 0xFFFF 00:11:20.428 Flush: Supported 00:11:20.428 Reservation: Not Supported 00:11:20.428 Namespace Sharing Capabilities: Multiple Controllers 00:11:20.428 Size (in LBAs): 262144 (1GiB) 00:11:20.428 Capacity (in LBAs): 262144 (1GiB) 00:11:20.428 Utilization (in LBAs): 262144 (1GiB) 00:11:20.428 Thin Provisioning: Not Supported 00:11:20.428 Per-NS Atomic Units: No 00:11:20.428 Maximum Single Source Range Length: 128 00:11:20.428 Maximum Copy Length: 128 00:11:20.428 Maximum Source Range Count: 128 00:11:20.428 NGUID/EUI64 Never Reused: No 00:11:20.428 Namespace Write Protected: No 00:11:20.428 Endurance group ID: 1 00:11:20.428 Number of LBA Formats: 8 00:11:20.428 Current LBA Format: LBA Format #04 00:11:20.428 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:20.428 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:20.428 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:20.428 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:11:20.428 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:20.428 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:20.428 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:20.428 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:20.428 00:11:20.428 Get Feature FDP: 00:11:20.428 ================ 00:11:20.428 Enabled: Yes 00:11:20.428 FDP configuration index: 0 00:11:20.428 00:11:20.428 FDP configurations log page 00:11:20.428 =========================== 00:11:20.428 Number of FDP configurations: 1 00:11:20.428 Version: 0 00:11:20.428 Size: 112 00:11:20.428 FDP Configuration Descriptor: 0 00:11:20.428 Descriptor Size: 96 00:11:20.428 Reclaim Group Identifier format: 2 00:11:20.428 FDP Volatile Write Cache: Not Present 00:11:20.428 FDP Configuration: Valid 00:11:20.428 Vendor Specific Size: 0 00:11:20.428 Number of Reclaim Groups: 2 00:11:20.428 Number of Reclaim Unit Handles: 8 00:11:20.428 Max Placement Identifiers: 128 00:11:20.428 Number of Namespaces Supported: 256 00:11:20.428 Reclaim Unit Nominal Size: 6000000 bytes 00:11:20.428 Estimated Reclaim Unit Time Limit: Not Reported 00:11:20.428 RUH Desc #000: RUH Type: Initially Isolated 00:11:20.428 RUH Desc #001: RUH Type: Initially Isolated 00:11:20.428 RUH Desc #002: RUH Type: Initially Isolated 00:11:20.428 RUH Desc #003: RUH Type: Initially Isolated 00:11:20.428 RUH Desc #004: RUH Type: Initially Isolated 00:11:20.429 RUH Desc #005: RUH Type: Initially Isolated 00:11:20.429 RUH Desc #006: RUH Type: Initially Isolated 00:11:20.429 RUH Desc #007: RUH Type: Initially Isolated 00:11:20.429 00:11:20.429 FDP reclaim unit handle usage log page 00:11:20.429 ====================================== 00:11:20.429 Number of Reclaim Unit Handles: 8 00:11:20.429 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:20.429 RUH Usage Desc #001: RUH Attributes: Unused 00:11:20.429 RUH Usage Desc #002: RUH Attributes: Unused 00:11:20.429 RUH Usage Desc #003: RUH Attributes: Unused 00:11:20.429 RUH Usage Desc #004: RUH Attributes: Unused 00:11:20.429 RUH Usage Desc #005: RUH Attributes: Unused 00:11:20.429 RUH Usage Desc #006: RUH Attributes: Unused 00:11:20.429 RUH Usage Desc #007: RUH Attributes: Unused 00:11:20.429 00:11:20.429 FDP statistics log page 00:11:20.429 ======================= 00:11:20.429 Host bytes with metadata written: 427859968 00:11:20.429 Media bytes with metadata written: 427925504 00:11:20.429 Media bytes erased: 0 00:11:20.429 00:11:20.429 FDP events log page 00:11:20.429 =================== 00:11:20.429 Number of FDP events: 0 00:11:20.429 00:11:20.429 NVM Specific Namespace Data 00:11:20.429 =========================== 00:11:20.429 Logical Block Storage Tag Mask: 0 00:11:20.429 Protection Information Capabilities: 00:11:20.429 16b Guard Protection Information Storage Tag Support: No 00:11:20.429 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:20.429 Storage Tag Check Read Support: No 00:11:20.429 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.429 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.429 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.429 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.429 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.429 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.429 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.429 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:20.429 00:11:20.429 real 0m1.829s 00:11:20.429 user 0m0.738s 00:11:20.429 sys 0m0.842s 00:11:20.429 08:29:59 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.429 08:29:59 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:11:20.429 ************************************ 00:11:20.429 END TEST nvme_identify 00:11:20.429 ************************************ 00:11:20.429 08:29:59 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:20.429 08:29:59 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:20.429 08:29:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.429 08:29:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:20.687 ************************************ 00:11:20.687 START TEST nvme_perf 00:11:20.687 ************************************ 00:11:20.687 08:29:59 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:11:20.687 08:29:59 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:22.110 Initializing NVMe Controllers 00:11:22.110 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:22.110 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:22.110 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:22.110 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:22.110 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:22.110 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:22.110 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:22.110 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:22.110 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:22.110 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:22.110 Initialization complete. Launching workers. 
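For reference, both passes traced above can be re-run by hand. A minimal sketch, assuming only the binary path and the four PCIe BDFs this job attaches (the loop and variable names are illustrative, not the harness's exact script; flags are copied verbatim from the traced calls):

#!/usr/bin/env bash
# Repro sketch -- paths, BDFs and flags taken from the traced calls
# above; everything else here is illustrative.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
bdfs=("0000:00:10.0" "0000:00:11.0" "0000:00:12.0" "0000:00:13.0")

# Identify pass (nvme.sh@15-16): dump controller and namespace data
# for each controller in turn.
for bdf in "${bdfs[@]}"; do
  "$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
done

# Perf pass (nvme.sh@22): queue depth 128, 12288-byte reads for 1 s;
# -LL requests the detailed latency histograms printed below.
"$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N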
00:11:22.110 ======================================================== 00:11:22.110 Latency(us) 00:11:22.110 Device Information : IOPS MiB/s Average min max 00:11:22.110 PCIE (0000:00:10.0) NSID 1 from core 0: 13692.58 160.46 9359.55 7571.21 36986.85 00:11:22.110 PCIE (0000:00:11.0) NSID 1 from core 0: 13692.58 160.46 9336.95 7667.54 34113.97 00:11:22.110 PCIE (0000:00:13.0) NSID 1 from core 0: 13692.58 160.46 9312.82 7713.53 31856.75 00:11:22.110 PCIE (0000:00:12.0) NSID 1 from core 0: 13692.58 160.46 9288.27 7654.84 29090.58 00:11:22.110 PCIE (0000:00:12.0) NSID 2 from core 0: 13692.58 160.46 9263.37 7660.95 26302.86 00:11:22.110 PCIE (0000:00:12.0) NSID 3 from core 0: 13692.58 160.46 9238.57 7658.84 23465.33 00:11:22.110 ======================================================== 00:11:22.110 Total : 82155.46 962.76 9299.92 7571.21 36986.85 00:11:22.110 00:11:22.110 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:22.110 ================================================================================= 00:11:22.110 1.00000% : 7804.742us 00:11:22.110 10.00000% : 8162.211us 00:11:22.110 25.00000% : 8519.680us 00:11:22.110 50.00000% : 8996.305us 00:11:22.110 75.00000% : 9651.665us 00:11:22.110 90.00000% : 10366.604us 00:11:22.110 95.00000% : 11260.276us 00:11:22.110 98.00000% : 12511.418us 00:11:22.110 99.00000% : 14417.920us 00:11:22.110 99.50000% : 29789.091us 00:11:22.110 99.90000% : 36700.160us 00:11:22.110 99.99000% : 36938.473us 00:11:22.110 99.99900% : 37176.785us 00:11:22.110 99.99990% : 37176.785us 00:11:22.110 99.99999% : 37176.785us 00:11:22.110 00:11:22.110 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:22.110 ================================================================================= 00:11:22.110 1.00000% : 7923.898us 00:11:22.110 10.00000% : 8221.789us 00:11:22.110 25.00000% : 8519.680us 00:11:22.110 50.00000% : 8996.305us 00:11:22.110 75.00000% : 9651.665us 00:11:22.110 90.00000% : 10307.025us 00:11:22.110 95.00000% : 11260.276us 00:11:22.110 98.00000% : 12511.418us 00:11:22.110 99.00000% : 14775.389us 00:11:22.110 99.50000% : 27525.120us 00:11:22.110 99.90000% : 33840.407us 00:11:22.110 99.99000% : 34078.720us 00:11:22.110 99.99900% : 34317.033us 00:11:22.111 99.99990% : 34317.033us 00:11:22.111 99.99999% : 34317.033us 00:11:22.111 00:11:22.111 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:22.111 ================================================================================= 00:11:22.111 1.00000% : 7923.898us 00:11:22.111 10.00000% : 8221.789us 00:11:22.111 25.00000% : 8519.680us 00:11:22.111 50.00000% : 8996.305us 00:11:22.111 75.00000% : 9651.665us 00:11:22.111 90.00000% : 10307.025us 00:11:22.111 95.00000% : 11260.276us 00:11:22.111 98.00000% : 12570.996us 00:11:22.111 99.00000% : 14000.873us 00:11:22.111 99.50000% : 25261.149us 00:11:22.111 99.90000% : 31457.280us 00:11:22.111 99.99000% : 31933.905us 00:11:22.111 99.99900% : 31933.905us 00:11:22.111 99.99990% : 31933.905us 00:11:22.111 99.99999% : 31933.905us 00:11:22.111 00:11:22.111 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:22.111 ================================================================================= 00:11:22.111 1.00000% : 7923.898us 00:11:22.111 10.00000% : 8221.789us 00:11:22.111 25.00000% : 8519.680us 00:11:22.111 50.00000% : 8996.305us 00:11:22.111 75.00000% : 9651.665us 00:11:22.111 90.00000% : 10307.025us 00:11:22.111 95.00000% : 11200.698us 00:11:22.111 98.00000% : 12630.575us 00:11:22.111 
99.00000% : 13881.716us 00:11:22.111 99.50000% : 22520.553us 00:11:22.111 99.90000% : 28597.527us 00:11:22.111 99.99000% : 29074.153us 00:11:22.111 99.99900% : 29193.309us 00:11:22.111 99.99990% : 29193.309us 00:11:22.111 99.99999% : 29193.309us 00:11:22.111 00:11:22.111 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:22.111 ================================================================================= 00:11:22.111 1.00000% : 7923.898us 00:11:22.111 10.00000% : 8221.789us 00:11:22.111 25.00000% : 8519.680us 00:11:22.111 50.00000% : 8996.305us 00:11:22.111 75.00000% : 9651.665us 00:11:22.111 90.00000% : 10307.025us 00:11:22.111 95.00000% : 11141.120us 00:11:22.111 98.00000% : 12511.418us 00:11:22.111 99.00000% : 13762.560us 00:11:22.111 99.50000% : 19660.800us 00:11:22.111 99.90000% : 25856.931us 00:11:22.111 99.99000% : 26333.556us 00:11:22.111 99.99900% : 26333.556us 00:11:22.111 99.99990% : 26333.556us 00:11:22.111 99.99999% : 26333.556us 00:11:22.111 00:11:22.111 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:22.111 ================================================================================= 00:11:22.111 1.00000% : 7923.898us 00:11:22.111 10.00000% : 8221.789us 00:11:22.111 25.00000% : 8519.680us 00:11:22.111 50.00000% : 8996.305us 00:11:22.111 75.00000% : 9651.665us 00:11:22.111 90.00000% : 10307.025us 00:11:22.111 95.00000% : 11200.698us 00:11:22.111 98.00000% : 12451.840us 00:11:22.111 99.00000% : 13941.295us 00:11:22.111 99.50000% : 16920.204us 00:11:22.111 99.90000% : 22997.178us 00:11:22.111 99.99000% : 23473.804us 00:11:22.111 99.99900% : 23473.804us 00:11:22.111 99.99990% : 23473.804us 00:11:22.111 99.99999% : 23473.804us 00:11:22.111 00:11:22.111 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:22.111 ============================================================================== 00:11:22.111 Range in us Cumulative IO count 00:11:22.111 7566.429 - 7596.218: 0.0073% ( 1) 00:11:22.111 7596.218 - 7626.007: 0.0584% ( 7) 00:11:22.111 7626.007 - 7685.585: 0.1971% ( 19) 00:11:22.111 7685.585 - 7745.164: 0.4527% ( 35) 00:11:22.111 7745.164 - 7804.742: 1.0514% ( 82) 00:11:22.111 7804.742 - 7864.320: 2.0444% ( 136) 00:11:22.111 7864.320 - 7923.898: 3.3002% ( 172) 00:11:22.111 7923.898 - 7983.476: 4.8773% ( 216) 00:11:22.111 7983.476 - 8043.055: 6.7684% ( 259) 00:11:22.111 8043.055 - 8102.633: 8.6887% ( 263) 00:11:22.111 8102.633 - 8162.211: 10.9886% ( 315) 00:11:22.111 8162.211 - 8221.789: 13.2301% ( 307) 00:11:22.111 8221.789 - 8281.367: 15.6031% ( 325) 00:11:22.111 8281.367 - 8340.945: 18.1951% ( 355) 00:11:22.111 8340.945 - 8400.524: 20.7652% ( 352) 00:11:22.111 8400.524 - 8460.102: 23.2988% ( 347) 00:11:22.111 8460.102 - 8519.680: 25.9930% ( 369) 00:11:22.111 8519.680 - 8579.258: 28.8332% ( 389) 00:11:22.111 8579.258 - 8638.836: 31.7319% ( 397) 00:11:22.111 8638.836 - 8698.415: 34.9591% ( 442) 00:11:22.111 8698.415 - 8757.993: 38.1571% ( 438) 00:11:22.111 8757.993 - 8817.571: 41.5012% ( 458) 00:11:22.111 8817.571 - 8877.149: 44.7503% ( 445) 00:11:22.111 8877.149 - 8936.727: 48.0286% ( 449) 00:11:22.111 8936.727 - 8996.305: 51.2631% ( 443) 00:11:22.111 8996.305 - 9055.884: 54.4904% ( 442) 00:11:22.111 9055.884 - 9115.462: 57.4985% ( 412) 00:11:22.111 9115.462 - 9175.040: 60.2585% ( 378) 00:11:22.111 9175.040 - 9234.618: 62.7848% ( 346) 00:11:22.111 9234.618 - 9294.196: 65.1431% ( 323) 00:11:22.111 9294.196 - 9353.775: 67.4504% ( 316) 00:11:22.111 9353.775 - 9413.353: 69.4217% ( 270) 00:11:22.111 9413.353 - 
9472.931: 71.1084% ( 231) 00:11:22.111 9472.931 - 9532.509: 72.8680% ( 241) 00:11:22.111 9532.509 - 9592.087: 74.4743% ( 220) 00:11:22.111 9592.087 - 9651.665: 76.1755% ( 233) 00:11:22.111 9651.665 - 9711.244: 77.7307% ( 213) 00:11:22.111 9711.244 - 9770.822: 79.3589% ( 223) 00:11:22.111 9770.822 - 9830.400: 80.7900% ( 196) 00:11:22.111 9830.400 - 9889.978: 82.3160% ( 209) 00:11:22.111 9889.978 - 9949.556: 83.7690% ( 199) 00:11:22.111 9949.556 - 10009.135: 85.1489% ( 189) 00:11:22.111 10009.135 - 10068.713: 86.3099% ( 159) 00:11:22.111 10068.713 - 10128.291: 87.3905% ( 148) 00:11:22.111 10128.291 - 10187.869: 88.2959% ( 124) 00:11:22.111 10187.869 - 10247.447: 89.1355% ( 115) 00:11:22.111 10247.447 - 10307.025: 89.8291% ( 95) 00:11:22.111 10307.025 - 10366.604: 90.4644% ( 87) 00:11:22.111 10366.604 - 10426.182: 90.9390% ( 65) 00:11:22.111 10426.182 - 10485.760: 91.4062% ( 64) 00:11:22.111 10485.760 - 10545.338: 91.8151% ( 56) 00:11:22.111 10545.338 - 10604.916: 92.1729% ( 49) 00:11:22.111 10604.916 - 10664.495: 92.5161% ( 47) 00:11:22.111 10664.495 - 10724.073: 92.8957% ( 52) 00:11:22.111 10724.073 - 10783.651: 93.2681% ( 51) 00:11:22.111 10783.651 - 10843.229: 93.5383% ( 37) 00:11:22.111 10843.229 - 10902.807: 93.7792% ( 33) 00:11:22.111 10902.807 - 10962.385: 94.0275% ( 34) 00:11:22.111 10962.385 - 11021.964: 94.2465% ( 30) 00:11:22.111 11021.964 - 11081.542: 94.4947% ( 34) 00:11:22.111 11081.542 - 11141.120: 94.7138% ( 30) 00:11:22.111 11141.120 - 11200.698: 94.8963% ( 25) 00:11:22.111 11200.698 - 11260.276: 95.1446% ( 34) 00:11:22.111 11260.276 - 11319.855: 95.3490% ( 28) 00:11:22.111 11319.855 - 11379.433: 95.5607% ( 29) 00:11:22.111 11379.433 - 11439.011: 95.7287% ( 23) 00:11:22.111 11439.011 - 11498.589: 95.9112% ( 25) 00:11:22.111 11498.589 - 11558.167: 96.1011% ( 26) 00:11:22.111 11558.167 - 11617.745: 96.2909% ( 26) 00:11:22.111 11617.745 - 11677.324: 96.4880% ( 27) 00:11:22.111 11677.324 - 11736.902: 96.6487% ( 22) 00:11:22.111 11736.902 - 11796.480: 96.8020% ( 21) 00:11:22.111 11796.480 - 11856.058: 96.9553% ( 21) 00:11:22.111 11856.058 - 11915.636: 97.1379% ( 25) 00:11:22.111 11915.636 - 11975.215: 97.2474% ( 15) 00:11:22.111 11975.215 - 12034.793: 97.3715% ( 17) 00:11:22.111 12034.793 - 12094.371: 97.4664% ( 13) 00:11:22.111 12094.371 - 12153.949: 97.5832% ( 16) 00:11:22.111 12153.949 - 12213.527: 97.6855% ( 14) 00:11:22.111 12213.527 - 12273.105: 97.7731% ( 12) 00:11:22.111 12273.105 - 12332.684: 97.8169% ( 6) 00:11:22.111 12332.684 - 12392.262: 97.8826% ( 9) 00:11:22.111 12392.262 - 12451.840: 97.9483% ( 9) 00:11:22.111 12451.840 - 12511.418: 98.0213% ( 10) 00:11:22.111 12511.418 - 12570.996: 98.0724% ( 7) 00:11:22.111 12570.996 - 12630.575: 98.1600% ( 12) 00:11:22.111 12630.575 - 12690.153: 98.2039% ( 6) 00:11:22.111 12690.153 - 12749.731: 98.2331% ( 4) 00:11:22.111 12749.731 - 12809.309: 98.3061% ( 10) 00:11:22.111 12809.309 - 12868.887: 98.3572% ( 7) 00:11:22.111 12868.887 - 12928.465: 98.3864% ( 4) 00:11:22.111 12928.465 - 12988.044: 98.4156% ( 4) 00:11:22.111 12988.044 - 13047.622: 98.4521% ( 5) 00:11:22.111 13047.622 - 13107.200: 98.4886% ( 5) 00:11:22.111 13107.200 - 13166.778: 98.5251% ( 5) 00:11:22.111 13166.778 - 13226.356: 98.5470% ( 3) 00:11:22.111 13226.356 - 13285.935: 98.5835% ( 5) 00:11:22.111 13285.935 - 13345.513: 98.6200% ( 5) 00:11:22.111 13345.513 - 13405.091: 98.6492% ( 4) 00:11:22.111 13405.091 - 13464.669: 98.6711% ( 3) 00:11:22.111 13464.669 - 13524.247: 98.7077% ( 5) 00:11:22.111 13524.247 - 13583.825: 98.7369% ( 4) 00:11:22.111 13583.825 - 
13643.404: 98.7734% ( 5) 00:11:22.111 13643.404 - 13702.982: 98.8099% ( 5) 00:11:22.111 13702.982 - 13762.560: 98.8245% ( 2) 00:11:22.111 13762.560 - 13822.138: 98.8610% ( 5) 00:11:22.111 13822.138 - 13881.716: 98.8902% ( 4) 00:11:22.111 13881.716 - 13941.295: 98.8975% ( 1) 00:11:22.111 13941.295 - 14000.873: 98.9194% ( 3) 00:11:22.111 14000.873 - 14060.451: 98.9267% ( 1) 00:11:22.111 14060.451 - 14120.029: 98.9413% ( 2) 00:11:22.111 14120.029 - 14179.607: 98.9559% ( 2) 00:11:22.111 14179.607 - 14239.185: 98.9705% ( 2) 00:11:22.111 14239.185 - 14298.764: 98.9851% ( 2) 00:11:22.111 14298.764 - 14358.342: 98.9924% ( 1) 00:11:22.111 14358.342 - 14417.920: 99.0143% ( 3) 00:11:22.112 14417.920 - 14477.498: 99.0289% ( 2) 00:11:22.112 14477.498 - 14537.076: 99.0508% ( 3) 00:11:22.112 14537.076 - 14596.655: 99.0581% ( 1) 00:11:22.112 14596.655 - 14656.233: 99.0654% ( 1) 00:11:22.112 27405.964 - 27525.120: 99.0727% ( 1) 00:11:22.112 27525.120 - 27644.276: 99.0946% ( 3) 00:11:22.112 27644.276 - 27763.433: 99.1165% ( 3) 00:11:22.112 27763.433 - 27882.589: 99.1384% ( 3) 00:11:22.112 27882.589 - 28001.745: 99.1603% ( 3) 00:11:22.112 28001.745 - 28120.902: 99.1895% ( 4) 00:11:22.112 28120.902 - 28240.058: 99.2114% ( 3) 00:11:22.112 28240.058 - 28359.215: 99.2334% ( 3) 00:11:22.112 28359.215 - 28478.371: 99.2553% ( 3) 00:11:22.112 28478.371 - 28597.527: 99.2772% ( 3) 00:11:22.112 28597.527 - 28716.684: 99.2918% ( 2) 00:11:22.112 28716.684 - 28835.840: 99.3137% ( 3) 00:11:22.112 28835.840 - 28954.996: 99.3429% ( 4) 00:11:22.112 28954.996 - 29074.153: 99.3648% ( 3) 00:11:22.112 29074.153 - 29193.309: 99.3867% ( 3) 00:11:22.112 29193.309 - 29312.465: 99.4086% ( 3) 00:11:22.112 29312.465 - 29431.622: 99.4305% ( 3) 00:11:22.112 29431.622 - 29550.778: 99.4597% ( 4) 00:11:22.112 29550.778 - 29669.935: 99.4816% ( 3) 00:11:22.112 29669.935 - 29789.091: 99.5035% ( 3) 00:11:22.112 29789.091 - 29908.247: 99.5254% ( 3) 00:11:22.112 29908.247 - 30027.404: 99.5327% ( 1) 00:11:22.112 34317.033 - 34555.345: 99.5473% ( 2) 00:11:22.112 34555.345 - 34793.658: 99.5838% ( 5) 00:11:22.112 34793.658 - 35031.971: 99.6349% ( 7) 00:11:22.112 35031.971 - 35270.284: 99.6714% ( 5) 00:11:22.112 35270.284 - 35508.596: 99.7298% ( 8) 00:11:22.112 35508.596 - 35746.909: 99.7664% ( 5) 00:11:22.112 35746.909 - 35985.222: 99.8102% ( 6) 00:11:22.112 35985.222 - 36223.535: 99.8540% ( 6) 00:11:22.112 36223.535 - 36461.847: 99.8978% ( 6) 00:11:22.112 36461.847 - 36700.160: 99.9416% ( 6) 00:11:22.112 36700.160 - 36938.473: 99.9927% ( 7) 00:11:22.112 36938.473 - 37176.785: 100.0000% ( 1) 00:11:22.112 00:11:22.112 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:22.112 ============================================================================== 00:11:22.112 Range in us Cumulative IO count 00:11:22.112 7626.007 - 7685.585: 0.0146% ( 2) 00:11:22.112 7685.585 - 7745.164: 0.0730% ( 8) 00:11:22.112 7745.164 - 7804.742: 0.2994% ( 31) 00:11:22.112 7804.742 - 7864.320: 0.7447% ( 61) 00:11:22.112 7864.320 - 7923.898: 1.5260% ( 107) 00:11:22.112 7923.898 - 7983.476: 2.7380% ( 166) 00:11:22.112 7983.476 - 8043.055: 4.3662% ( 223) 00:11:22.112 8043.055 - 8102.633: 6.3741% ( 275) 00:11:22.112 8102.633 - 8162.211: 8.6522% ( 312) 00:11:22.112 8162.211 - 8221.789: 11.2150% ( 351) 00:11:22.112 8221.789 - 8281.367: 13.9092% ( 369) 00:11:22.112 8281.367 - 8340.945: 16.7640% ( 391) 00:11:22.112 8340.945 - 8400.524: 19.6481% ( 395) 00:11:22.112 8400.524 - 8460.102: 22.7804% ( 429) 00:11:22.112 8460.102 - 8519.680: 25.7739% ( 410) 00:11:22.112 
8519.680 - 8579.258: 28.9136% ( 430) 00:11:22.112 8579.258 - 8638.836: 32.0605% ( 431) 00:11:22.112 8638.836 - 8698.415: 35.3461% ( 450) 00:11:22.112 8698.415 - 8757.993: 38.7923% ( 472) 00:11:22.112 8757.993 - 8817.571: 42.2313% ( 471) 00:11:22.112 8817.571 - 8877.149: 45.6557% ( 469) 00:11:22.112 8877.149 - 8936.727: 48.9486% ( 451) 00:11:22.112 8936.727 - 8996.305: 52.1831% ( 443) 00:11:22.112 8996.305 - 9055.884: 55.2643% ( 422) 00:11:22.112 9055.884 - 9115.462: 58.0827% ( 386) 00:11:22.112 9115.462 - 9175.040: 60.7331% ( 363) 00:11:22.112 9175.040 - 9234.618: 63.0549% ( 318) 00:11:22.112 9234.618 - 9294.196: 65.3256% ( 311) 00:11:22.112 9294.196 - 9353.775: 67.3189% ( 273) 00:11:22.112 9353.775 - 9413.353: 69.3852% ( 283) 00:11:22.112 9413.353 - 9472.931: 71.3055% ( 263) 00:11:22.112 9472.931 - 9532.509: 73.1235% ( 249) 00:11:22.112 9532.509 - 9592.087: 74.9854% ( 255) 00:11:22.112 9592.087 - 9651.665: 76.8838% ( 260) 00:11:22.112 9651.665 - 9711.244: 78.7383% ( 254) 00:11:22.112 9711.244 - 9770.822: 80.5856% ( 253) 00:11:22.112 9770.822 - 9830.400: 82.3014% ( 235) 00:11:22.112 9830.400 - 9889.978: 83.8712% ( 215) 00:11:22.112 9889.978 - 9949.556: 85.2366% ( 187) 00:11:22.112 9949.556 - 10009.135: 86.4632% ( 168) 00:11:22.112 10009.135 - 10068.713: 87.5511% ( 149) 00:11:22.112 10068.713 - 10128.291: 88.5295% ( 134) 00:11:22.112 10128.291 - 10187.869: 89.2742% ( 102) 00:11:22.112 10187.869 - 10247.447: 89.9314% ( 90) 00:11:22.112 10247.447 - 10307.025: 90.5374% ( 83) 00:11:22.112 10307.025 - 10366.604: 91.0193% ( 66) 00:11:22.112 10366.604 - 10426.182: 91.4574% ( 60) 00:11:22.112 10426.182 - 10485.760: 91.7932% ( 46) 00:11:22.112 10485.760 - 10545.338: 92.1218% ( 45) 00:11:22.112 10545.338 - 10604.916: 92.4357% ( 43) 00:11:22.112 10604.916 - 10664.495: 92.7643% ( 45) 00:11:22.112 10664.495 - 10724.073: 93.0637% ( 41) 00:11:22.112 10724.073 - 10783.651: 93.3265% ( 36) 00:11:22.112 10783.651 - 10843.229: 93.5821% ( 35) 00:11:22.112 10843.229 - 10902.807: 93.8011% ( 30) 00:11:22.112 10902.807 - 10962.385: 94.0567% ( 35) 00:11:22.112 10962.385 - 11021.964: 94.2757% ( 30) 00:11:22.112 11021.964 - 11081.542: 94.4801% ( 28) 00:11:22.112 11081.542 - 11141.120: 94.6773% ( 27) 00:11:22.112 11141.120 - 11200.698: 94.8890% ( 29) 00:11:22.112 11200.698 - 11260.276: 95.0350% ( 20) 00:11:22.112 11260.276 - 11319.855: 95.2176% ( 25) 00:11:22.112 11319.855 - 11379.433: 95.3782% ( 22) 00:11:22.112 11379.433 - 11439.011: 95.5534% ( 24) 00:11:22.112 11439.011 - 11498.589: 95.7579% ( 28) 00:11:22.112 11498.589 - 11558.167: 95.9477% ( 26) 00:11:22.112 11558.167 - 11617.745: 96.1230% ( 24) 00:11:22.112 11617.745 - 11677.324: 96.3493% ( 31) 00:11:22.112 11677.324 - 11736.902: 96.5318% ( 25) 00:11:22.112 11736.902 - 11796.480: 96.7071% ( 24) 00:11:22.112 11796.480 - 11856.058: 96.8823% ( 24) 00:11:22.112 11856.058 - 11915.636: 97.0283% ( 20) 00:11:22.112 11915.636 - 11975.215: 97.1817% ( 21) 00:11:22.112 11975.215 - 12034.793: 97.3350% ( 21) 00:11:22.112 12034.793 - 12094.371: 97.4664% ( 18) 00:11:22.112 12094.371 - 12153.949: 97.5832% ( 16) 00:11:22.112 12153.949 - 12213.527: 97.6636% ( 11) 00:11:22.112 12213.527 - 12273.105: 97.7293% ( 9) 00:11:22.112 12273.105 - 12332.684: 97.8242% ( 13) 00:11:22.112 12332.684 - 12392.262: 97.9045% ( 11) 00:11:22.112 12392.262 - 12451.840: 97.9848% ( 11) 00:11:22.112 12451.840 - 12511.418: 98.0724% ( 12) 00:11:22.112 12511.418 - 12570.996: 98.1235% ( 7) 00:11:22.112 12570.996 - 12630.575: 98.1966% ( 10) 00:11:22.112 12630.575 - 12690.153: 98.2550% ( 8) 00:11:22.112 
12690.153 - 12749.731: 98.3207% ( 9) 00:11:22.112 12749.731 - 12809.309: 98.3499% ( 4) 00:11:22.112 12809.309 - 12868.887: 98.3864% ( 5) 00:11:22.112 12868.887 - 12928.465: 98.4083% ( 3) 00:11:22.112 12928.465 - 12988.044: 98.4229% ( 2) 00:11:22.112 12988.044 - 13047.622: 98.4448% ( 3) 00:11:22.112 13047.622 - 13107.200: 98.4521% ( 1) 00:11:22.112 13107.200 - 13166.778: 98.4667% ( 2) 00:11:22.112 13166.778 - 13226.356: 98.4886% ( 3) 00:11:22.112 13226.356 - 13285.935: 98.5105% ( 3) 00:11:22.112 13285.935 - 13345.513: 98.5251% ( 2) 00:11:22.112 13345.513 - 13405.091: 98.5397% ( 2) 00:11:22.112 13405.091 - 13464.669: 98.5835% ( 6) 00:11:22.112 13464.669 - 13524.247: 98.6127% ( 4) 00:11:22.112 13524.247 - 13583.825: 98.6492% ( 5) 00:11:22.112 13583.825 - 13643.404: 98.6638% ( 2) 00:11:22.112 13643.404 - 13702.982: 98.6784% ( 2) 00:11:22.112 13702.982 - 13762.560: 98.7004% ( 3) 00:11:22.112 13762.560 - 13822.138: 98.7150% ( 2) 00:11:22.112 13822.138 - 13881.716: 98.7296% ( 2) 00:11:22.112 13881.716 - 13941.295: 98.7515% ( 3) 00:11:22.112 13941.295 - 14000.873: 98.7661% ( 2) 00:11:22.112 14000.873 - 14060.451: 98.7807% ( 2) 00:11:22.112 14060.451 - 14120.029: 98.8026% ( 3) 00:11:22.112 14120.029 - 14179.607: 98.8172% ( 2) 00:11:22.112 14179.607 - 14239.185: 98.8318% ( 2) 00:11:22.112 14239.185 - 14298.764: 98.8464% ( 2) 00:11:22.112 14298.764 - 14358.342: 98.8683% ( 3) 00:11:22.112 14358.342 - 14417.920: 98.8902% ( 3) 00:11:22.112 14417.920 - 14477.498: 98.9121% ( 3) 00:11:22.112 14477.498 - 14537.076: 98.9267% ( 2) 00:11:22.112 14537.076 - 14596.655: 98.9413% ( 2) 00:11:22.112 14596.655 - 14656.233: 98.9632% ( 3) 00:11:22.112 14656.233 - 14715.811: 98.9851% ( 3) 00:11:22.112 14715.811 - 14775.389: 99.0070% ( 3) 00:11:22.112 14775.389 - 14834.967: 99.0289% ( 3) 00:11:22.112 14834.967 - 14894.545: 99.0435% ( 2) 00:11:22.112 14894.545 - 14954.124: 99.0581% ( 2) 00:11:22.112 14954.124 - 15013.702: 99.0654% ( 1) 00:11:22.112 25261.149 - 25380.305: 99.0800% ( 2) 00:11:22.112 25380.305 - 25499.462: 99.0946% ( 2) 00:11:22.112 25499.462 - 25618.618: 99.1238% ( 4) 00:11:22.112 25618.618 - 25737.775: 99.1384% ( 2) 00:11:22.112 25737.775 - 25856.931: 99.1676% ( 4) 00:11:22.112 25856.931 - 25976.087: 99.1895% ( 3) 00:11:22.112 25976.087 - 26095.244: 99.2188% ( 4) 00:11:22.112 26095.244 - 26214.400: 99.2407% ( 3) 00:11:22.112 26214.400 - 26333.556: 99.2699% ( 4) 00:11:22.112 26333.556 - 26452.713: 99.2918% ( 3) 00:11:22.112 26452.713 - 26571.869: 99.3137% ( 3) 00:11:22.112 26571.869 - 26691.025: 99.3429% ( 4) 00:11:22.112 26691.025 - 26810.182: 99.3648% ( 3) 00:11:22.112 26810.182 - 26929.338: 99.3940% ( 4) 00:11:22.112 26929.338 - 27048.495: 99.4086% ( 2) 00:11:22.112 27048.495 - 27167.651: 99.4305% ( 3) 00:11:22.112 27167.651 - 27286.807: 99.4524% ( 3) 00:11:22.113 27286.807 - 27405.964: 99.4816% ( 4) 00:11:22.113 27405.964 - 27525.120: 99.5035% ( 3) 00:11:22.113 27525.120 - 27644.276: 99.5254% ( 3) 00:11:22.113 27644.276 - 27763.433: 99.5327% ( 1) 00:11:22.113 31695.593 - 31933.905: 99.5473% ( 2) 00:11:22.113 31933.905 - 32172.218: 99.5984% ( 7) 00:11:22.113 32172.218 - 32410.531: 99.6495% ( 7) 00:11:22.113 32410.531 - 32648.844: 99.6933% ( 6) 00:11:22.113 32648.844 - 32887.156: 99.7445% ( 7) 00:11:22.113 32887.156 - 33125.469: 99.7956% ( 7) 00:11:22.113 33125.469 - 33363.782: 99.8394% ( 6) 00:11:22.113 33363.782 - 33602.095: 99.8905% ( 7) 00:11:22.113 33602.095 - 33840.407: 99.9416% ( 7) 00:11:22.113 33840.407 - 34078.720: 99.9927% ( 7) 00:11:22.113 34078.720 - 34317.033: 100.0000% ( 1) 00:11:22.113 
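These latency histograms are cumulative: each bucket reports the percentage of all I/Os that completed within its range or sooner, so the Nth percentile is the first bucket whose percentage reaches N. For 0000:00:10.0 above, the histogram crosses 99% in the 14417.920 - 14477.498 us bucket, which matches the "99.00000% : 14417.920us" summary row. A throwaway filter in that spirit (a sketch only; "perf.log" is a hypothetical capture, and it assumes the raw console log's one-bucket-per-line layout rather than the flattened form shown here):

# Print the first histogram bucket at or above a target cumulative
# percentage (first device in the file). Assumes lines of the form
# "<lo> - <hi>: <pct>% ( <count>)".
awk -v target=99 '
  / - [0-9.]+:.*%/ {
    for (i = 1; i <= NF; i++) if ($i ~ /%$/) pct = $i + 0   # "99.0143%" -> 99.0143
    if (pct >= target) { print; exit }
  }
' perf.log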
00:11:22.113 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:22.113 ============================================================================== 00:11:22.113 Range in us Cumulative IO count 00:11:22.113 7685.585 - 7745.164: 0.0365% ( 5) 00:11:22.113 7745.164 - 7804.742: 0.2482% ( 29) 00:11:22.113 7804.742 - 7864.320: 0.7447% ( 68) 00:11:22.113 7864.320 - 7923.898: 1.5479% ( 110) 00:11:22.113 7923.898 - 7983.476: 2.8037% ( 172) 00:11:22.113 7983.476 - 8043.055: 4.4173% ( 221) 00:11:22.113 8043.055 - 8102.633: 6.4033% ( 272) 00:11:22.113 8102.633 - 8162.211: 8.7179% ( 317) 00:11:22.113 8162.211 - 8221.789: 11.2661% ( 349) 00:11:22.113 8221.789 - 8281.367: 13.8800% ( 358) 00:11:22.113 8281.367 - 8340.945: 16.6691% ( 382) 00:11:22.113 8340.945 - 8400.524: 19.6189% ( 404) 00:11:22.113 8400.524 - 8460.102: 22.6343% ( 413) 00:11:22.113 8460.102 - 8519.680: 25.7812% ( 431) 00:11:22.113 8519.680 - 8579.258: 29.0012% ( 441) 00:11:22.113 8579.258 - 8638.836: 32.2138% ( 440) 00:11:22.113 8638.836 - 8698.415: 35.5140% ( 452) 00:11:22.113 8698.415 - 8757.993: 39.0114% ( 479) 00:11:22.113 8757.993 - 8817.571: 42.3481% ( 457) 00:11:22.113 8817.571 - 8877.149: 45.6630% ( 454) 00:11:22.113 8877.149 - 8936.727: 48.9924% ( 456) 00:11:22.113 8936.727 - 8996.305: 52.1831% ( 437) 00:11:22.113 8996.305 - 9055.884: 55.2643% ( 422) 00:11:22.113 9055.884 - 9115.462: 58.1338% ( 393) 00:11:22.113 9115.462 - 9175.040: 60.7550% ( 359) 00:11:22.113 9175.040 - 9234.618: 63.2447% ( 341) 00:11:22.113 9234.618 - 9294.196: 65.5009% ( 309) 00:11:22.113 9294.196 - 9353.775: 67.5453% ( 280) 00:11:22.113 9353.775 - 9413.353: 69.4509% ( 261) 00:11:22.113 9413.353 - 9472.931: 71.3055% ( 254) 00:11:22.113 9472.931 - 9532.509: 73.1820% ( 257) 00:11:22.113 9532.509 - 9592.087: 74.9854% ( 247) 00:11:22.113 9592.087 - 9651.665: 76.7961% ( 248) 00:11:22.113 9651.665 - 9711.244: 78.5777% ( 244) 00:11:22.113 9711.244 - 9770.822: 80.3154% ( 238) 00:11:22.113 9770.822 - 9830.400: 82.0166% ( 233) 00:11:22.113 9830.400 - 9889.978: 83.5864% ( 215) 00:11:22.113 9889.978 - 9949.556: 84.9737% ( 190) 00:11:22.113 9949.556 - 10009.135: 86.1784% ( 165) 00:11:22.113 10009.135 - 10068.713: 87.2883% ( 152) 00:11:22.113 10068.713 - 10128.291: 88.1936% ( 124) 00:11:22.113 10128.291 - 10187.869: 88.9749% ( 107) 00:11:22.113 10187.869 - 10247.447: 89.6320% ( 90) 00:11:22.113 10247.447 - 10307.025: 90.2526% ( 85) 00:11:22.113 10307.025 - 10366.604: 90.8075% ( 76) 00:11:22.113 10366.604 - 10426.182: 91.2967% ( 67) 00:11:22.113 10426.182 - 10485.760: 91.7348% ( 60) 00:11:22.113 10485.760 - 10545.338: 92.1072% ( 51) 00:11:22.113 10545.338 - 10604.916: 92.4723% ( 50) 00:11:22.113 10604.916 - 10664.495: 92.7862% ( 43) 00:11:22.113 10664.495 - 10724.073: 93.1075% ( 44) 00:11:22.113 10724.073 - 10783.651: 93.3995% ( 40) 00:11:22.113 10783.651 - 10843.229: 93.6551% ( 35) 00:11:22.113 10843.229 - 10902.807: 93.8960% ( 33) 00:11:22.113 10902.807 - 10962.385: 94.1151% ( 30) 00:11:22.113 10962.385 - 11021.964: 94.3268% ( 29) 00:11:22.113 11021.964 - 11081.542: 94.5459% ( 30) 00:11:22.113 11081.542 - 11141.120: 94.7576% ( 29) 00:11:22.113 11141.120 - 11200.698: 94.9547% ( 27) 00:11:22.113 11200.698 - 11260.276: 95.1227% ( 23) 00:11:22.113 11260.276 - 11319.855: 95.3052% ( 25) 00:11:22.113 11319.855 - 11379.433: 95.5023% ( 27) 00:11:22.113 11379.433 - 11439.011: 95.6922% ( 26) 00:11:22.113 11439.011 - 11498.589: 95.8674% ( 24) 00:11:22.113 11498.589 - 11558.167: 96.0426% ( 24) 00:11:22.113 11558.167 - 11617.745: 96.2106% ( 23) 00:11:22.113 11617.745 
- 11677.324: 96.4077% ( 27) 00:11:22.113 11677.324 - 11736.902: 96.5902% ( 25) 00:11:22.113 11736.902 - 11796.480: 96.7582% ( 23) 00:11:22.113 11796.480 - 11856.058: 96.9626% ( 28) 00:11:22.113 11856.058 - 11915.636: 97.1086% ( 20) 00:11:22.113 11915.636 - 11975.215: 97.2620% ( 21) 00:11:22.113 11975.215 - 12034.793: 97.3496% ( 12) 00:11:22.113 12034.793 - 12094.371: 97.4518% ( 14) 00:11:22.113 12094.371 - 12153.949: 97.5467% ( 13) 00:11:22.113 12153.949 - 12213.527: 97.6124% ( 9) 00:11:22.113 12213.527 - 12273.105: 97.6855% ( 10) 00:11:22.113 12273.105 - 12332.684: 97.7512% ( 9) 00:11:22.113 12332.684 - 12392.262: 97.8315% ( 11) 00:11:22.113 12392.262 - 12451.840: 97.8972% ( 9) 00:11:22.113 12451.840 - 12511.418: 97.9556% ( 8) 00:11:22.113 12511.418 - 12570.996: 98.0286% ( 10) 00:11:22.113 12570.996 - 12630.575: 98.0943% ( 9) 00:11:22.113 12630.575 - 12690.153: 98.1527% ( 8) 00:11:22.113 12690.153 - 12749.731: 98.2039% ( 7) 00:11:22.113 12749.731 - 12809.309: 98.2623% ( 8) 00:11:22.113 12809.309 - 12868.887: 98.3499% ( 12) 00:11:22.113 12868.887 - 12928.465: 98.4229% ( 10) 00:11:22.113 12928.465 - 12988.044: 98.5105% ( 12) 00:11:22.113 12988.044 - 13047.622: 98.5908% ( 11) 00:11:22.113 13047.622 - 13107.200: 98.6565% ( 9) 00:11:22.113 13107.200 - 13166.778: 98.7077% ( 7) 00:11:22.113 13166.778 - 13226.356: 98.7442% ( 5) 00:11:22.113 13226.356 - 13285.935: 98.7734% ( 4) 00:11:22.113 13285.935 - 13345.513: 98.7880% ( 2) 00:11:22.113 13345.513 - 13405.091: 98.8099% ( 3) 00:11:22.113 13405.091 - 13464.669: 98.8318% ( 3) 00:11:22.113 13464.669 - 13524.247: 98.8537% ( 3) 00:11:22.113 13524.247 - 13583.825: 98.8683% ( 2) 00:11:22.113 13583.825 - 13643.404: 98.8902% ( 3) 00:11:22.113 13643.404 - 13702.982: 98.9121% ( 3) 00:11:22.113 13702.982 - 13762.560: 98.9267% ( 2) 00:11:22.113 13762.560 - 13822.138: 98.9486% ( 3) 00:11:22.113 13822.138 - 13881.716: 98.9705% ( 3) 00:11:22.113 13881.716 - 13941.295: 98.9924% ( 3) 00:11:22.113 13941.295 - 14000.873: 99.0143% ( 3) 00:11:22.113 14000.873 - 14060.451: 99.0362% ( 3) 00:11:22.113 14060.451 - 14120.029: 99.0581% ( 3) 00:11:22.113 14120.029 - 14179.607: 99.0654% ( 1) 00:11:22.113 22997.178 - 23116.335: 99.0727% ( 1) 00:11:22.113 23116.335 - 23235.491: 99.0946% ( 3) 00:11:22.113 23235.491 - 23354.647: 99.1165% ( 3) 00:11:22.113 23354.647 - 23473.804: 99.1457% ( 4) 00:11:22.113 23473.804 - 23592.960: 99.1676% ( 3) 00:11:22.113 23592.960 - 23712.116: 99.1895% ( 3) 00:11:22.113 23712.116 - 23831.273: 99.2114% ( 3) 00:11:22.113 23831.273 - 23950.429: 99.2407% ( 4) 00:11:22.113 23950.429 - 24069.585: 99.2626% ( 3) 00:11:22.113 24069.585 - 24188.742: 99.2845% ( 3) 00:11:22.113 24188.742 - 24307.898: 99.3064% ( 3) 00:11:22.113 24307.898 - 24427.055: 99.3283% ( 3) 00:11:22.113 24427.055 - 24546.211: 99.3575% ( 4) 00:11:22.113 24546.211 - 24665.367: 99.3794% ( 3) 00:11:22.113 24665.367 - 24784.524: 99.4013% ( 3) 00:11:22.113 24784.524 - 24903.680: 99.4232% ( 3) 00:11:22.113 24903.680 - 25022.836: 99.4524% ( 4) 00:11:22.113 25022.836 - 25141.993: 99.4743% ( 3) 00:11:22.113 25141.993 - 25261.149: 99.5035% ( 4) 00:11:22.113 25261.149 - 25380.305: 99.5254% ( 3) 00:11:22.113 25380.305 - 25499.462: 99.5327% ( 1) 00:11:22.113 29550.778 - 29669.935: 99.5546% ( 3) 00:11:22.113 29669.935 - 29789.091: 99.5838% ( 4) 00:11:22.113 29789.091 - 29908.247: 99.6057% ( 3) 00:11:22.113 29908.247 - 30027.404: 99.6276% ( 3) 00:11:22.113 30027.404 - 30146.560: 99.6495% ( 3) 00:11:22.113 30146.560 - 30265.716: 99.6714% ( 3) 00:11:22.113 30265.716 - 30384.873: 99.7006% ( 4) 
00:11:22.113 30384.873 - 30504.029: 99.7225% ( 3) 00:11:22.113 30504.029 - 30742.342: 99.7737% ( 7) 00:11:22.113 30742.342 - 30980.655: 99.8175% ( 6) 00:11:22.113 30980.655 - 31218.967: 99.8686% ( 7) 00:11:22.113 31218.967 - 31457.280: 99.9124% ( 6) 00:11:22.113 31457.280 - 31695.593: 99.9635% ( 7) 00:11:22.113 31695.593 - 31933.905: 100.0000% ( 5) 00:11:22.113 00:11:22.113 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:22.113 ============================================================================== 00:11:22.113 Range in us Cumulative IO count 00:11:22.113 7626.007 - 7685.585: 0.0219% ( 3) 00:11:22.113 7685.585 - 7745.164: 0.0803% ( 8) 00:11:22.113 7745.164 - 7804.742: 0.3213% ( 33) 00:11:22.113 7804.742 - 7864.320: 0.8762% ( 76) 00:11:22.113 7864.320 - 7923.898: 1.7158% ( 115) 00:11:22.113 7923.898 - 7983.476: 2.9717% ( 172) 00:11:22.113 7983.476 - 8043.055: 4.5415% ( 215) 00:11:22.113 8043.055 - 8102.633: 6.6370% ( 287) 00:11:22.113 8102.633 - 8162.211: 8.8055% ( 297) 00:11:22.113 8162.211 - 8221.789: 11.3537% ( 349) 00:11:22.113 8221.789 - 8281.367: 13.9749% ( 359) 00:11:22.113 8281.367 - 8340.945: 16.6983% ( 373) 00:11:22.113 8340.945 - 8400.524: 19.6700% ( 407) 00:11:22.113 8400.524 - 8460.102: 22.6562% ( 409) 00:11:22.113 8460.102 - 8519.680: 25.7812% ( 428) 00:11:22.114 8519.680 - 8579.258: 29.0231% ( 444) 00:11:22.114 8579.258 - 8638.836: 32.3379% ( 454) 00:11:22.114 8638.836 - 8698.415: 35.7258% ( 464) 00:11:22.114 8698.415 - 8757.993: 39.0844% ( 460) 00:11:22.114 8757.993 - 8817.571: 42.3262% ( 444) 00:11:22.114 8817.571 - 8877.149: 45.5680% ( 444) 00:11:22.114 8877.149 - 8936.727: 48.8537% ( 450) 00:11:22.114 8936.727 - 8996.305: 52.1466% ( 451) 00:11:22.114 8996.305 - 9055.884: 55.3665% ( 441) 00:11:22.114 9055.884 - 9115.462: 58.1995% ( 388) 00:11:22.114 9115.462 - 9175.040: 60.8353% ( 361) 00:11:22.114 9175.040 - 9234.618: 63.1352% ( 315) 00:11:22.114 9234.618 - 9294.196: 65.2891% ( 295) 00:11:22.114 9294.196 - 9353.775: 67.3335% ( 280) 00:11:22.114 9353.775 - 9413.353: 69.1881% ( 254) 00:11:22.114 9413.353 - 9472.931: 71.1011% ( 262) 00:11:22.114 9472.931 - 9532.509: 72.9410% ( 252) 00:11:22.114 9532.509 - 9592.087: 74.7591% ( 249) 00:11:22.114 9592.087 - 9651.665: 76.5917% ( 251) 00:11:22.114 9651.665 - 9711.244: 78.3586% ( 242) 00:11:22.114 9711.244 - 9770.822: 80.0015% ( 225) 00:11:22.114 9770.822 - 9830.400: 81.6735% ( 229) 00:11:22.114 9830.400 - 9889.978: 83.2141% ( 211) 00:11:22.114 9889.978 - 9949.556: 84.6086% ( 191) 00:11:22.114 9949.556 - 10009.135: 85.9229% ( 180) 00:11:22.114 10009.135 - 10068.713: 87.0765% ( 158) 00:11:22.114 10068.713 - 10128.291: 88.0476% ( 133) 00:11:22.114 10128.291 - 10187.869: 88.7266% ( 93) 00:11:22.114 10187.869 - 10247.447: 89.3838% ( 90) 00:11:22.114 10247.447 - 10307.025: 90.0263% ( 88) 00:11:22.114 10307.025 - 10366.604: 90.6104% ( 80) 00:11:22.114 10366.604 - 10426.182: 91.1580% ( 75) 00:11:22.114 10426.182 - 10485.760: 91.6399% ( 66) 00:11:22.114 10485.760 - 10545.338: 92.0707% ( 59) 00:11:22.114 10545.338 - 10604.916: 92.4650% ( 54) 00:11:22.114 10604.916 - 10664.495: 92.8519% ( 53) 00:11:22.114 10664.495 - 10724.073: 93.2024% ( 48) 00:11:22.114 10724.073 - 10783.651: 93.4945% ( 40) 00:11:22.114 10783.651 - 10843.229: 93.7427% ( 34) 00:11:22.114 10843.229 - 10902.807: 94.0055% ( 36) 00:11:22.114 10902.807 - 10962.385: 94.2246% ( 30) 00:11:22.114 10962.385 - 11021.964: 94.4436% ( 30) 00:11:22.114 11021.964 - 11081.542: 94.6554% ( 29) 00:11:22.114 11081.542 - 11141.120: 94.9109% ( 35) 00:11:22.114 
11141.120 - 11200.698: 95.1081% ( 27) 00:11:22.114 11200.698 - 11260.276: 95.2979% ( 26) 00:11:22.114 11260.276 - 11319.855: 95.4950% ( 27) 00:11:22.114 11319.855 - 11379.433: 95.7141% ( 30) 00:11:22.114 11379.433 - 11439.011: 95.8528% ( 19) 00:11:22.114 11439.011 - 11498.589: 95.9842% ( 18) 00:11:22.114 11498.589 - 11558.167: 96.1230% ( 19) 00:11:22.114 11558.167 - 11617.745: 96.2398% ( 16) 00:11:22.114 11617.745 - 11677.324: 96.3712% ( 18) 00:11:22.114 11677.324 - 11736.902: 96.5026% ( 18) 00:11:22.114 11736.902 - 11796.480: 96.6121% ( 15) 00:11:22.114 11796.480 - 11856.058: 96.7436% ( 18) 00:11:22.114 11856.058 - 11915.636: 96.9042% ( 22) 00:11:22.114 11915.636 - 11975.215: 97.0502% ( 20) 00:11:22.114 11975.215 - 12034.793: 97.1817% ( 18) 00:11:22.114 12034.793 - 12094.371: 97.2839% ( 14) 00:11:22.114 12094.371 - 12153.949: 97.4007% ( 16) 00:11:22.114 12153.949 - 12213.527: 97.4956% ( 13) 00:11:22.114 12213.527 - 12273.105: 97.5832% ( 12) 00:11:22.114 12273.105 - 12332.684: 97.6636% ( 11) 00:11:22.114 12332.684 - 12392.262: 97.7439% ( 11) 00:11:22.114 12392.262 - 12451.840: 97.8242% ( 11) 00:11:22.114 12451.840 - 12511.418: 97.8972% ( 10) 00:11:22.114 12511.418 - 12570.996: 97.9848% ( 12) 00:11:22.114 12570.996 - 12630.575: 98.0724% ( 12) 00:11:22.114 12630.575 - 12690.153: 98.1527% ( 11) 00:11:22.114 12690.153 - 12749.731: 98.2185% ( 9) 00:11:22.114 12749.731 - 12809.309: 98.2769% ( 8) 00:11:22.114 12809.309 - 12868.887: 98.3353% ( 8) 00:11:22.114 12868.887 - 12928.465: 98.4010% ( 9) 00:11:22.114 12928.465 - 12988.044: 98.4667% ( 9) 00:11:22.114 12988.044 - 13047.622: 98.5324% ( 9) 00:11:22.114 13047.622 - 13107.200: 98.5981% ( 9) 00:11:22.114 13107.200 - 13166.778: 98.6565% ( 8) 00:11:22.114 13166.778 - 13226.356: 98.7223% ( 9) 00:11:22.114 13226.356 - 13285.935: 98.7734% ( 7) 00:11:22.114 13285.935 - 13345.513: 98.8099% ( 5) 00:11:22.114 13345.513 - 13405.091: 98.8464% ( 5) 00:11:22.114 13405.091 - 13464.669: 98.8829% ( 5) 00:11:22.114 13464.669 - 13524.247: 98.9048% ( 3) 00:11:22.114 13524.247 - 13583.825: 98.9267% ( 3) 00:11:22.114 13583.825 - 13643.404: 98.9486% ( 3) 00:11:22.114 13643.404 - 13702.982: 98.9705% ( 3) 00:11:22.114 13702.982 - 13762.560: 98.9851% ( 2) 00:11:22.114 13762.560 - 13822.138: 98.9997% ( 2) 00:11:22.114 13822.138 - 13881.716: 99.0216% ( 3) 00:11:22.114 13881.716 - 13941.295: 99.0435% ( 3) 00:11:22.114 13941.295 - 14000.873: 99.0581% ( 2) 00:11:22.114 14000.873 - 14060.451: 99.0654% ( 1) 00:11:22.114 20256.582 - 20375.738: 99.0873% ( 3) 00:11:22.114 20375.738 - 20494.895: 99.1165% ( 4) 00:11:22.114 20494.895 - 20614.051: 99.1384% ( 3) 00:11:22.114 20614.051 - 20733.207: 99.1603% ( 3) 00:11:22.114 20733.207 - 20852.364: 99.1822% ( 3) 00:11:22.114 20852.364 - 20971.520: 99.2041% ( 3) 00:11:22.114 20971.520 - 21090.676: 99.2334% ( 4) 00:11:22.114 21090.676 - 21209.833: 99.2553% ( 3) 00:11:22.114 21209.833 - 21328.989: 99.2772% ( 3) 00:11:22.114 21328.989 - 21448.145: 99.2991% ( 3) 00:11:22.114 21448.145 - 21567.302: 99.3283% ( 4) 00:11:22.114 21567.302 - 21686.458: 99.3502% ( 3) 00:11:22.114 21686.458 - 21805.615: 99.3721% ( 3) 00:11:22.114 21805.615 - 21924.771: 99.3940% ( 3) 00:11:22.114 21924.771 - 22043.927: 99.4232% ( 4) 00:11:22.114 22043.927 - 22163.084: 99.4451% ( 3) 00:11:22.114 22163.084 - 22282.240: 99.4597% ( 2) 00:11:22.114 22282.240 - 22401.396: 99.4889% ( 4) 00:11:22.114 22401.396 - 22520.553: 99.5108% ( 3) 00:11:22.114 22520.553 - 22639.709: 99.5327% ( 3) 00:11:22.114 26691.025 - 26810.182: 99.5473% ( 2) 00:11:22.114 26810.182 - 26929.338: 
99.5692% ( 3) 00:11:22.114 26929.338 - 27048.495: 99.5911% ( 3) 00:11:22.114 27048.495 - 27167.651: 99.6203% ( 4) 00:11:22.114 27167.651 - 27286.807: 99.6422% ( 3) 00:11:22.114 27286.807 - 27405.964: 99.6641% ( 3) 00:11:22.114 27405.964 - 27525.120: 99.6860% ( 3) 00:11:22.114 27525.120 - 27644.276: 99.7152% ( 4) 00:11:22.114 27644.276 - 27763.433: 99.7371% ( 3) 00:11:22.114 27763.433 - 27882.589: 99.7591% ( 3) 00:11:22.114 27882.589 - 28001.745: 99.7810% ( 3) 00:11:22.114 28001.745 - 28120.902: 99.8102% ( 4) 00:11:22.114 28120.902 - 28240.058: 99.8321% ( 3) 00:11:22.114 28240.058 - 28359.215: 99.8540% ( 3) 00:11:22.114 28359.215 - 28478.371: 99.8832% ( 4) 00:11:22.114 28478.371 - 28597.527: 99.9051% ( 3) 00:11:22.114 28597.527 - 28716.684: 99.9270% ( 3) 00:11:22.114 28716.684 - 28835.840: 99.9416% ( 2) 00:11:22.114 28835.840 - 28954.996: 99.9635% ( 3) 00:11:22.114 28954.996 - 29074.153: 99.9927% ( 4) 00:11:22.114 29074.153 - 29193.309: 100.0000% ( 1) 00:11:22.114 00:11:22.114 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:22.114 ============================================================================== 00:11:22.114 Range in us Cumulative IO count 00:11:22.114 7626.007 - 7685.585: 0.0146% ( 2) 00:11:22.114 7685.585 - 7745.164: 0.0511% ( 5) 00:11:22.114 7745.164 - 7804.742: 0.2848% ( 32) 00:11:22.114 7804.742 - 7864.320: 0.8251% ( 74) 00:11:22.114 7864.320 - 7923.898: 1.6939% ( 119) 00:11:22.114 7923.898 - 7983.476: 2.9279% ( 169) 00:11:22.114 7983.476 - 8043.055: 4.4831% ( 213) 00:11:22.114 8043.055 - 8102.633: 6.4106% ( 264) 00:11:22.114 8102.633 - 8162.211: 8.7471% ( 320) 00:11:22.114 8162.211 - 8221.789: 11.3245% ( 353) 00:11:22.114 8221.789 - 8281.367: 13.9968% ( 366) 00:11:22.114 8281.367 - 8340.945: 16.8297% ( 388) 00:11:22.114 8340.945 - 8400.524: 19.7795% ( 404) 00:11:22.114 8400.524 - 8460.102: 22.6928% ( 399) 00:11:22.114 8460.102 - 8519.680: 25.7666% ( 421) 00:11:22.114 8519.680 - 8579.258: 28.9574% ( 437) 00:11:22.114 8579.258 - 8638.836: 32.1335% ( 435) 00:11:22.114 8638.836 - 8698.415: 35.5432% ( 467) 00:11:22.114 8698.415 - 8757.993: 38.9019% ( 460) 00:11:22.114 8757.993 - 8817.571: 42.1948% ( 451) 00:11:22.114 8817.571 - 8877.149: 45.3490% ( 432) 00:11:22.114 8877.149 - 8936.727: 48.5908% ( 444) 00:11:22.114 8936.727 - 8996.305: 51.8765% ( 450) 00:11:22.114 8996.305 - 9055.884: 55.0453% ( 434) 00:11:22.114 9055.884 - 9115.462: 57.8052% ( 378) 00:11:22.114 9115.462 - 9175.040: 60.4264% ( 359) 00:11:22.114 9175.040 - 9234.618: 62.7117% ( 313) 00:11:22.114 9234.618 - 9294.196: 64.8584% ( 294) 00:11:22.114 9294.196 - 9353.775: 66.9393% ( 285) 00:11:22.114 9353.775 - 9413.353: 68.8303% ( 259) 00:11:22.114 9413.353 - 9472.931: 70.8090% ( 271) 00:11:22.114 9472.931 - 9532.509: 72.7001% ( 259) 00:11:22.114 9532.509 - 9592.087: 74.5254% ( 250) 00:11:22.114 9592.087 - 9651.665: 76.2996% ( 243) 00:11:22.114 9651.665 - 9711.244: 78.0812% ( 244) 00:11:22.114 9711.244 - 9770.822: 79.8554% ( 243) 00:11:22.114 9770.822 - 9830.400: 81.5786% ( 236) 00:11:22.114 9830.400 - 9889.978: 83.1411% ( 214) 00:11:22.114 9889.978 - 9949.556: 84.5867% ( 198) 00:11:22.114 9949.556 - 10009.135: 85.8791% ( 177) 00:11:22.114 10009.135 - 10068.713: 87.0327% ( 158) 00:11:22.114 10068.713 - 10128.291: 87.9600% ( 127) 00:11:22.114 10128.291 - 10187.869: 88.7704% ( 111) 00:11:22.114 10187.869 - 10247.447: 89.4422% ( 92) 00:11:22.114 10247.447 - 10307.025: 90.1285% ( 94) 00:11:22.114 10307.025 - 10366.604: 90.6980% ( 78) 00:11:22.115 10366.604 - 10426.182: 91.2967% ( 82) 00:11:22.115 
10426.182 - 10485.760: 91.7348% ( 60) 00:11:22.115 10485.760 - 10545.338: 92.1437% ( 56) 00:11:22.115 10545.338 - 10604.916: 92.5088% ( 50) 00:11:22.115 10604.916 - 10664.495: 92.8519% ( 47) 00:11:22.115 10664.495 - 10724.073: 93.1951% ( 47) 00:11:22.115 10724.073 - 10783.651: 93.5164% ( 44) 00:11:22.115 10783.651 - 10843.229: 93.8376% ( 44) 00:11:22.115 10843.229 - 10902.807: 94.1516% ( 43) 00:11:22.115 10902.807 - 10962.385: 94.4436% ( 40) 00:11:22.115 10962.385 - 11021.964: 94.6846% ( 33) 00:11:22.115 11021.964 - 11081.542: 94.9036% ( 30) 00:11:22.115 11081.542 - 11141.120: 95.0789% ( 24) 00:11:22.115 11141.120 - 11200.698: 95.2395% ( 22) 00:11:22.115 11200.698 - 11260.276: 95.3709% ( 18) 00:11:22.115 11260.276 - 11319.855: 95.4950% ( 17) 00:11:22.115 11319.855 - 11379.433: 95.6265% ( 18) 00:11:22.115 11379.433 - 11439.011: 95.7579% ( 18) 00:11:22.115 11439.011 - 11498.589: 95.8747% ( 16) 00:11:22.115 11498.589 - 11558.167: 96.0061% ( 18) 00:11:22.115 11558.167 - 11617.745: 96.1595% ( 21) 00:11:22.115 11617.745 - 11677.324: 96.3201% ( 22) 00:11:22.115 11677.324 - 11736.902: 96.4953% ( 24) 00:11:22.115 11736.902 - 11796.480: 96.6487% ( 21) 00:11:22.115 11796.480 - 11856.058: 96.8239% ( 24) 00:11:22.115 11856.058 - 11915.636: 96.9626% ( 19) 00:11:22.115 11915.636 - 11975.215: 97.0940% ( 18) 00:11:22.115 11975.215 - 12034.793: 97.2328% ( 19) 00:11:22.115 12034.793 - 12094.371: 97.3496% ( 16) 00:11:22.115 12094.371 - 12153.949: 97.4883% ( 19) 00:11:22.115 12153.949 - 12213.527: 97.6489% ( 22) 00:11:22.115 12213.527 - 12273.105: 97.7658% ( 16) 00:11:22.115 12273.105 - 12332.684: 97.8607% ( 13) 00:11:22.115 12332.684 - 12392.262: 97.9264% ( 9) 00:11:22.115 12392.262 - 12451.840: 97.9921% ( 9) 00:11:22.115 12451.840 - 12511.418: 98.0724% ( 11) 00:11:22.115 12511.418 - 12570.996: 98.1527% ( 11) 00:11:22.115 12570.996 - 12630.575: 98.2331% ( 11) 00:11:22.115 12630.575 - 12690.153: 98.3134% ( 11) 00:11:22.115 12690.153 - 12749.731: 98.3791% ( 9) 00:11:22.115 12749.731 - 12809.309: 98.4229% ( 6) 00:11:22.115 12809.309 - 12868.887: 98.4594% ( 5) 00:11:22.115 12868.887 - 12928.465: 98.5032% ( 6) 00:11:22.115 12928.465 - 12988.044: 98.5397% ( 5) 00:11:22.115 12988.044 - 13047.622: 98.5835% ( 6) 00:11:22.115 13047.622 - 13107.200: 98.6200% ( 5) 00:11:22.115 13107.200 - 13166.778: 98.6565% ( 5) 00:11:22.115 13166.778 - 13226.356: 98.6930% ( 5) 00:11:22.115 13226.356 - 13285.935: 98.7296% ( 5) 00:11:22.115 13285.935 - 13345.513: 98.7661% ( 5) 00:11:22.115 13345.513 - 13405.091: 98.8099% ( 6) 00:11:22.115 13405.091 - 13464.669: 98.8391% ( 4) 00:11:22.115 13464.669 - 13524.247: 98.8756% ( 5) 00:11:22.115 13524.247 - 13583.825: 98.9194% ( 6) 00:11:22.115 13583.825 - 13643.404: 98.9559% ( 5) 00:11:22.115 13643.404 - 13702.982: 98.9924% ( 5) 00:11:22.115 13702.982 - 13762.560: 99.0362% ( 6) 00:11:22.115 13762.560 - 13822.138: 99.0581% ( 3) 00:11:22.115 13822.138 - 13881.716: 99.0654% ( 1) 00:11:22.115 17396.829 - 17515.985: 99.0800% ( 2) 00:11:22.115 17515.985 - 17635.142: 99.1019% ( 3) 00:11:22.115 17635.142 - 17754.298: 99.1238% ( 3) 00:11:22.115 17754.298 - 17873.455: 99.1457% ( 3) 00:11:22.115 17873.455 - 17992.611: 99.1676% ( 3) 00:11:22.115 17992.611 - 18111.767: 99.1968% ( 4) 00:11:22.115 18111.767 - 18230.924: 99.2188% ( 3) 00:11:22.115 18230.924 - 18350.080: 99.2407% ( 3) 00:11:22.115 18350.080 - 18469.236: 99.2699% ( 4) 00:11:22.115 18469.236 - 18588.393: 99.2918% ( 3) 00:11:22.115 18588.393 - 18707.549: 99.3137% ( 3) 00:11:22.115 18707.549 - 18826.705: 99.3429% ( 4) 00:11:22.115 18826.705 - 
18945.862: 99.3648% ( 3) 00:11:22.115 18945.862 - 19065.018: 99.3867% ( 3) 00:11:22.115 19065.018 - 19184.175: 99.4086% ( 3) 00:11:22.115 19184.175 - 19303.331: 99.4378% ( 4) 00:11:22.115 19303.331 - 19422.487: 99.4597% ( 3) 00:11:22.115 19422.487 - 19541.644: 99.4889% ( 4) 00:11:22.115 19541.644 - 19660.800: 99.5108% ( 3) 00:11:22.115 19660.800 - 19779.956: 99.5327% ( 3) 00:11:22.115 23831.273 - 23950.429: 99.5400% ( 1) 00:11:22.115 23950.429 - 24069.585: 99.5546% ( 2) 00:11:22.115 24069.585 - 24188.742: 99.5765% ( 3) 00:11:22.115 24188.742 - 24307.898: 99.6057% ( 4) 00:11:22.115 24307.898 - 24427.055: 99.6276% ( 3) 00:11:22.115 24427.055 - 24546.211: 99.6495% ( 3) 00:11:22.115 24546.211 - 24665.367: 99.6714% ( 3) 00:11:22.115 24665.367 - 24784.524: 99.7006% ( 4) 00:11:22.115 24784.524 - 24903.680: 99.7225% ( 3) 00:11:22.115 24903.680 - 25022.836: 99.7445% ( 3) 00:11:22.115 25022.836 - 25141.993: 99.7664% ( 3) 00:11:22.115 25141.993 - 25261.149: 99.7883% ( 3) 00:11:22.115 25261.149 - 25380.305: 99.8102% ( 3) 00:11:22.115 25380.305 - 25499.462: 99.8321% ( 3) 00:11:22.115 25499.462 - 25618.618: 99.8540% ( 3) 00:11:22.115 25618.618 - 25737.775: 99.8832% ( 4) 00:11:22.115 25737.775 - 25856.931: 99.9051% ( 3) 00:11:22.115 25856.931 - 25976.087: 99.9270% ( 3) 00:11:22.115 25976.087 - 26095.244: 99.9562% ( 4) 00:11:22.115 26095.244 - 26214.400: 99.9781% ( 3) 00:11:22.115 26214.400 - 26333.556: 100.0000% ( 3) 00:11:22.115 00:11:22.115 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:22.115 ============================================================================== 00:11:22.115 Range in us Cumulative IO count 00:11:22.115 7626.007 - 7685.585: 0.0146% ( 2) 00:11:22.115 7685.585 - 7745.164: 0.0584% ( 6) 00:11:22.115 7745.164 - 7804.742: 0.3432% ( 39) 00:11:22.115 7804.742 - 7864.320: 0.8470% ( 69) 00:11:22.115 7864.320 - 7923.898: 1.7231% ( 120) 00:11:22.115 7923.898 - 7983.476: 2.9352% ( 166) 00:11:22.115 7983.476 - 8043.055: 4.5269% ( 218) 00:11:22.115 8043.055 - 8102.633: 6.4763% ( 267) 00:11:22.115 8102.633 - 8162.211: 8.8128% ( 320) 00:11:22.115 8162.211 - 8221.789: 11.2953% ( 340) 00:11:22.115 8221.789 - 8281.367: 14.0041% ( 371) 00:11:22.115 8281.367 - 8340.945: 16.7421% ( 375) 00:11:22.115 8340.945 - 8400.524: 19.6189% ( 394) 00:11:22.115 8400.524 - 8460.102: 22.7001% ( 422) 00:11:22.115 8460.102 - 8519.680: 25.7374% ( 416) 00:11:22.115 8519.680 - 8579.258: 28.9428% ( 439) 00:11:22.115 8579.258 - 8638.836: 32.1335% ( 437) 00:11:22.115 8638.836 - 8698.415: 35.4045% ( 448) 00:11:22.115 8698.415 - 8757.993: 38.6901% ( 450) 00:11:22.115 8757.993 - 8817.571: 41.9904% ( 452) 00:11:22.115 8817.571 - 8877.149: 45.2468% ( 446) 00:11:22.115 8877.149 - 8936.727: 48.5324% ( 450) 00:11:22.115 8936.727 - 8996.305: 51.8107% ( 449) 00:11:22.115 8996.305 - 9055.884: 54.9650% ( 432) 00:11:22.115 9055.884 - 9115.462: 57.8198% ( 391) 00:11:22.115 9115.462 - 9175.040: 60.4483% ( 360) 00:11:22.115 9175.040 - 9234.618: 62.9454% ( 342) 00:11:22.115 9234.618 - 9294.196: 65.1285% ( 299) 00:11:22.115 9294.196 - 9353.775: 67.1729% ( 280) 00:11:22.115 9353.775 - 9413.353: 69.0567% ( 258) 00:11:22.115 9413.353 - 9472.931: 70.8893% ( 251) 00:11:22.115 9472.931 - 9532.509: 72.7731% ( 258) 00:11:22.115 9532.509 - 9592.087: 74.5984% ( 250) 00:11:22.115 9592.087 - 9651.665: 76.4019% ( 247) 00:11:22.115 9651.665 - 9711.244: 78.1542% ( 240) 00:11:22.115 9711.244 - 9770.822: 79.9065% ( 240) 00:11:22.115 9770.822 - 9830.400: 81.5640% ( 227) 00:11:22.115 9830.400 - 9889.978: 83.0973% ( 210) 00:11:22.115 
9889.978 - 9949.556: 84.4845% ( 190) 00:11:22.115 9949.556 - 10009.135: 85.7842% ( 178) 00:11:22.115 10009.135 - 10068.713: 86.9378% ( 158) 00:11:22.115 10068.713 - 10128.291: 87.8505% ( 125) 00:11:22.115 10128.291 - 10187.869: 88.6390% ( 108) 00:11:22.115 10187.869 - 10247.447: 89.3619% ( 99) 00:11:22.115 10247.447 - 10307.025: 90.0190% ( 90) 00:11:22.115 10307.025 - 10366.604: 90.5739% ( 76) 00:11:22.115 10366.604 - 10426.182: 91.0777% ( 69) 00:11:22.115 10426.182 - 10485.760: 91.5158% ( 60) 00:11:22.115 10485.760 - 10545.338: 91.9246% ( 56) 00:11:22.115 10545.338 - 10604.916: 92.3262% ( 55) 00:11:22.115 10604.916 - 10664.495: 92.7059% ( 52) 00:11:22.116 10664.495 - 10724.073: 93.0418% ( 46) 00:11:22.116 10724.073 - 10783.651: 93.3119% ( 37) 00:11:22.116 10783.651 - 10843.229: 93.5748% ( 36) 00:11:22.116 10843.229 - 10902.807: 93.8522% ( 38) 00:11:22.116 10902.807 - 10962.385: 94.1224% ( 37) 00:11:22.116 10962.385 - 11021.964: 94.3706% ( 34) 00:11:22.116 11021.964 - 11081.542: 94.6043% ( 32) 00:11:22.116 11081.542 - 11141.120: 94.8379% ( 32) 00:11:22.116 11141.120 - 11200.698: 95.0862% ( 34) 00:11:22.116 11200.698 - 11260.276: 95.2833% ( 27) 00:11:22.116 11260.276 - 11319.855: 95.4731% ( 26) 00:11:22.116 11319.855 - 11379.433: 95.6557% ( 25) 00:11:22.116 11379.433 - 11439.011: 95.8382% ( 25) 00:11:22.116 11439.011 - 11498.589: 96.0134% ( 24) 00:11:22.116 11498.589 - 11558.167: 96.1814% ( 23) 00:11:22.116 11558.167 - 11617.745: 96.3493% ( 23) 00:11:22.116 11617.745 - 11677.324: 96.5391% ( 26) 00:11:22.116 11677.324 - 11736.902: 96.7217% ( 25) 00:11:22.116 11736.902 - 11796.480: 96.8969% ( 24) 00:11:22.116 11796.480 - 11856.058: 97.0721% ( 24) 00:11:22.116 11856.058 - 11915.636: 97.2255% ( 21) 00:11:22.116 11915.636 - 11975.215: 97.3569% ( 18) 00:11:22.116 11975.215 - 12034.793: 97.4664% ( 15) 00:11:22.116 12034.793 - 12094.371: 97.5686% ( 14) 00:11:22.116 12094.371 - 12153.949: 97.6489% ( 11) 00:11:22.116 12153.949 - 12213.527: 97.7293% ( 11) 00:11:22.116 12213.527 - 12273.105: 97.8169% ( 12) 00:11:22.116 12273.105 - 12332.684: 97.8899% ( 10) 00:11:22.116 12332.684 - 12392.262: 97.9629% ( 10) 00:11:22.116 12392.262 - 12451.840: 98.0359% ( 10) 00:11:22.116 12451.840 - 12511.418: 98.0943% ( 8) 00:11:22.116 12511.418 - 12570.996: 98.1527% ( 8) 00:11:22.116 12570.996 - 12630.575: 98.2258% ( 10) 00:11:22.116 12630.575 - 12690.153: 98.2696% ( 6) 00:11:22.116 12690.153 - 12749.731: 98.3134% ( 6) 00:11:22.116 12749.731 - 12809.309: 98.3499% ( 5) 00:11:22.116 12809.309 - 12868.887: 98.3864% ( 5) 00:11:22.116 12868.887 - 12928.465: 98.4229% ( 5) 00:11:22.116 12928.465 - 12988.044: 98.4594% ( 5) 00:11:22.116 12988.044 - 13047.622: 98.4959% ( 5) 00:11:22.116 13047.622 - 13107.200: 98.5251% ( 4) 00:11:22.116 13107.200 - 13166.778: 98.5616% ( 5) 00:11:22.116 13166.778 - 13226.356: 98.5981% ( 5) 00:11:22.116 13226.356 - 13285.935: 98.6419% ( 6) 00:11:22.116 13285.935 - 13345.513: 98.6784% ( 5) 00:11:22.116 13345.513 - 13405.091: 98.7077% ( 4) 00:11:22.116 13405.091 - 13464.669: 98.7515% ( 6) 00:11:22.116 13464.669 - 13524.247: 98.7880% ( 5) 00:11:22.116 13524.247 - 13583.825: 98.8318% ( 6) 00:11:22.116 13583.825 - 13643.404: 98.8683% ( 5) 00:11:22.116 13643.404 - 13702.982: 98.8902% ( 3) 00:11:22.116 13702.982 - 13762.560: 98.9340% ( 6) 00:11:22.116 13762.560 - 13822.138: 98.9632% ( 4) 00:11:22.116 13822.138 - 13881.716: 98.9924% ( 4) 00:11:22.116 13881.716 - 13941.295: 99.0070% ( 2) 00:11:22.116 13941.295 - 14000.873: 99.0216% ( 2) 00:11:22.116 14000.873 - 14060.451: 99.0362% ( 2) 00:11:22.116 
14060.451 - 14120.029: 99.0508% ( 2) 00:11:22.116 14120.029 - 14179.607: 99.0654% ( 2) 00:11:22.116 14596.655 - 14656.233: 99.0727% ( 1) 00:11:22.116 14656.233 - 14715.811: 99.0800% ( 1) 00:11:22.116 14715.811 - 14775.389: 99.0946% ( 2) 00:11:22.116 14775.389 - 14834.967: 99.1092% ( 2) 00:11:22.116 14834.967 - 14894.545: 99.1165% ( 1) 00:11:22.116 14894.545 - 14954.124: 99.1311% ( 2) 00:11:22.116 14954.124 - 15013.702: 99.1384% ( 1) 00:11:22.116 15013.702 - 15073.280: 99.1457% ( 1) 00:11:22.116 15073.280 - 15132.858: 99.1603% ( 2) 00:11:22.116 15132.858 - 15192.436: 99.1749% ( 2) 00:11:22.116 15192.436 - 15252.015: 99.1822% ( 1) 00:11:22.116 15252.015 - 15371.171: 99.2114% ( 4) 00:11:22.116 15371.171 - 15490.327: 99.2334% ( 3) 00:11:22.116 15490.327 - 15609.484: 99.2626% ( 4) 00:11:22.116 15609.484 - 15728.640: 99.2845% ( 3) 00:11:22.116 15728.640 - 15847.796: 99.3064% ( 3) 00:11:22.116 15847.796 - 15966.953: 99.3283% ( 3) 00:11:22.116 15966.953 - 16086.109: 99.3575% ( 4) 00:11:22.116 16086.109 - 16205.265: 99.3794% ( 3) 00:11:22.116 16205.265 - 16324.422: 99.4013% ( 3) 00:11:22.116 16324.422 - 16443.578: 99.4232% ( 3) 00:11:22.116 16443.578 - 16562.735: 99.4524% ( 4) 00:11:22.116 16562.735 - 16681.891: 99.4743% ( 3) 00:11:22.116 16681.891 - 16801.047: 99.4962% ( 3) 00:11:22.116 16801.047 - 16920.204: 99.5254% ( 4) 00:11:22.116 16920.204 - 17039.360: 99.5327% ( 1) 00:11:22.116 21090.676 - 21209.833: 99.5546% ( 3) 00:11:22.116 21209.833 - 21328.989: 99.5765% ( 3) 00:11:22.116 21328.989 - 21448.145: 99.5984% ( 3) 00:11:22.116 21448.145 - 21567.302: 99.6203% ( 3) 00:11:22.116 21567.302 - 21686.458: 99.6422% ( 3) 00:11:22.116 21686.458 - 21805.615: 99.6714% ( 4) 00:11:22.116 21805.615 - 21924.771: 99.6933% ( 3) 00:11:22.116 21924.771 - 22043.927: 99.7152% ( 3) 00:11:22.116 22043.927 - 22163.084: 99.7371% ( 3) 00:11:22.116 22163.084 - 22282.240: 99.7664% ( 4) 00:11:22.116 22282.240 - 22401.396: 99.7883% ( 3) 00:11:22.116 22401.396 - 22520.553: 99.8102% ( 3) 00:11:22.116 22520.553 - 22639.709: 99.8321% ( 3) 00:11:22.116 22639.709 - 22758.865: 99.8540% ( 3) 00:11:22.116 22758.865 - 22878.022: 99.8832% ( 4) 00:11:22.116 22878.022 - 22997.178: 99.9051% ( 3) 00:11:22.116 22997.178 - 23116.335: 99.9343% ( 4) 00:11:22.116 23116.335 - 23235.491: 99.9489% ( 2) 00:11:22.116 23235.491 - 23354.647: 99.9781% ( 4) 00:11:22.116 23354.647 - 23473.804: 100.0000% ( 3) 00:11:22.116 00:11:22.116 08:30:01 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:11:23.494 Initializing NVMe Controllers 00:11:23.494 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:23.494 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:23.494 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:23.494 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:23.494 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:23.494 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:23.494 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:23.494 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:23.494 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:23.494 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:23.494 Initialization complete. Launching workers. 
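For context on the run that follows: this spdk_nvme_perf invocation issues 12288-byte (12 KiB) sequential writes at queue depth 128 for one second against every attached namespace; -i 0 selects shared-memory group 0, and passing -L twice (the -LL here) enables software latency tracking and prints the per-namespace summary and histogram data seen below. A minimal sketch for reproducing a comparable run against a local SPDK build; the /home/vagrant/spdk_repo paths match this CI VM and are an assumption anywhere else:

# Reserve hugepages and bind the NVMe controllers to a userspace
# driver; SPDK applications cannot claim the devices otherwise.
sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh

# One-second sequential-write run: 12 KiB I/Os, queue depth 128,
# software latency tracking with histograms (-LL), shm group 0.
sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 128 -w write -o 12288 -t 1 -LL -i 0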
00:11:23.494 ======================================================== 00:11:23.494 Latency(us) 00:11:23.494 Device Information : IOPS MiB/s Average min max 00:11:23.494 PCIE (0000:00:10.0) NSID 1 from core 0: 9990.81 117.08 12838.44 9500.29 45261.66 00:11:23.494 PCIE (0000:00:11.0) NSID 1 from core 0: 9990.81 117.08 12809.45 9673.75 42644.41 00:11:23.494 PCIE (0000:00:13.0) NSID 1 from core 0: 9990.81 117.08 12783.40 9543.08 40828.49 00:11:23.494 PCIE (0000:00:12.0) NSID 1 from core 0: 9990.81 117.08 12753.40 9585.43 38455.29 00:11:23.494 PCIE (0000:00:12.0) NSID 2 from core 0: 9990.81 117.08 12723.77 9669.73 35873.35 00:11:23.494 PCIE (0000:00:12.0) NSID 3 from core 0: 9990.81 117.08 12694.36 9677.35 33367.16 00:11:23.494 ======================================================== 00:11:23.494 Total : 59944.88 702.48 12767.14 9500.29 45261.66 00:11:23.494 00:11:23.494 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:23.494 ================================================================================= 00:11:23.494 1.00000% : 9830.400us 00:11:23.494 10.00000% : 10604.916us 00:11:23.494 25.00000% : 11260.276us 00:11:23.494 50.00000% : 12273.105us 00:11:23.494 75.00000% : 13881.716us 00:11:23.494 90.00000% : 14715.811us 00:11:23.494 95.00000% : 15192.436us 00:11:23.494 98.00000% : 18469.236us 00:11:23.494 99.00000% : 34555.345us 00:11:23.494 99.50000% : 43134.604us 00:11:23.494 99.90000% : 45041.105us 00:11:23.494 99.99000% : 45279.418us 00:11:23.494 99.99900% : 45279.418us 00:11:23.494 99.99990% : 45279.418us 00:11:23.494 99.99999% : 45279.418us 00:11:23.494 00:11:23.494 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:23.494 ================================================================================= 00:11:23.494 1.00000% : 9889.978us 00:11:23.494 10.00000% : 10604.916us 00:11:23.494 25.00000% : 11260.276us 00:11:23.494 50.00000% : 12213.527us 00:11:23.494 75.00000% : 13941.295us 00:11:23.494 90.00000% : 14656.233us 00:11:23.494 95.00000% : 15073.280us 00:11:23.494 98.00000% : 18469.236us 00:11:23.494 99.00000% : 33125.469us 00:11:23.494 99.50000% : 40751.476us 00:11:23.494 99.90000% : 42419.665us 00:11:23.494 99.99000% : 42657.978us 00:11:23.494 99.99900% : 42657.978us 00:11:23.494 99.99990% : 42657.978us 00:11:23.494 99.99999% : 42657.978us 00:11:23.494 00:11:23.494 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:23.494 ================================================================================= 00:11:23.494 1.00000% : 9949.556us 00:11:23.494 10.00000% : 10604.916us 00:11:23.494 25.00000% : 11200.698us 00:11:23.494 50.00000% : 12273.105us 00:11:23.494 75.00000% : 13881.716us 00:11:23.494 90.00000% : 14656.233us 00:11:23.494 95.00000% : 15132.858us 00:11:23.494 98.00000% : 18111.767us 00:11:23.494 99.00000% : 31457.280us 00:11:23.494 99.50000% : 39083.287us 00:11:23.494 99.90000% : 40513.164us 00:11:23.494 99.99000% : 40989.789us 00:11:23.494 99.99900% : 40989.789us 00:11:23.494 99.99990% : 40989.789us 00:11:23.494 99.99999% : 40989.789us 00:11:23.494 00:11:23.494 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:23.494 ================================================================================= 00:11:23.494 1.00000% : 9949.556us 00:11:23.494 10.00000% : 10604.916us 00:11:23.494 25.00000% : 11260.276us 00:11:23.494 50.00000% : 12273.105us 00:11:23.494 75.00000% : 13881.716us 00:11:23.495 90.00000% : 14656.233us 00:11:23.495 95.00000% : 15073.280us 00:11:23.495 98.00000% : 18230.924us 
00:11:23.495 99.00000% : 28716.684us 00:11:23.495 99.50000% : 36461.847us 00:11:23.495 99.90000% : 38130.036us 00:11:23.495 99.99000% : 38606.662us 00:11:23.495 99.99900% : 38606.662us 00:11:23.495 99.99990% : 38606.662us 00:11:23.495 99.99999% : 38606.662us 00:11:23.495 00:11:23.495 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:23.495 ================================================================================= 00:11:23.495 1.00000% : 9949.556us 00:11:23.495 10.00000% : 10604.916us 00:11:23.495 25.00000% : 11260.276us 00:11:23.495 50.00000% : 12273.105us 00:11:23.495 75.00000% : 13881.716us 00:11:23.495 90.00000% : 14656.233us 00:11:23.495 95.00000% : 15073.280us 00:11:23.495 98.00000% : 18230.924us 00:11:23.495 99.00000% : 26333.556us 00:11:23.495 99.50000% : 34078.720us 00:11:23.495 99.90000% : 35508.596us 00:11:23.495 99.99000% : 35985.222us 00:11:23.495 99.99900% : 35985.222us 00:11:23.495 99.99990% : 35985.222us 00:11:23.495 99.99999% : 35985.222us 00:11:23.495 00:11:23.495 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:23.495 ================================================================================= 00:11:23.495 1.00000% : 9949.556us 00:11:23.495 10.00000% : 10604.916us 00:11:23.495 25.00000% : 11260.276us 00:11:23.495 50.00000% : 12273.105us 00:11:23.495 75.00000% : 13881.716us 00:11:23.495 90.00000% : 14656.233us 00:11:23.495 95.00000% : 15073.280us 00:11:23.495 98.00000% : 18350.080us 00:11:23.495 99.00000% : 23592.960us 00:11:23.495 99.50000% : 31457.280us 00:11:23.495 99.90000% : 33125.469us 00:11:23.495 99.99000% : 33363.782us 00:11:23.495 99.99900% : 33602.095us 00:11:23.495 99.99990% : 33602.095us 00:11:23.495 99.99999% : 33602.095us 00:11:23.495 00:11:23.495 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:23.495 ============================================================================== 00:11:23.495 Range in us Cumulative IO count 00:11:23.495 9472.931 - 9532.509: 0.0398% ( 4) 00:11:23.495 9532.509 - 9592.087: 0.0995% ( 6) 00:11:23.495 9592.087 - 9651.665: 0.3284% ( 23) 00:11:23.495 9651.665 - 9711.244: 0.5474% ( 22) 00:11:23.495 9711.244 - 9770.822: 0.8459% ( 30) 00:11:23.495 9770.822 - 9830.400: 1.2042% ( 36) 00:11:23.495 9830.400 - 9889.978: 1.5725% ( 37) 00:11:23.495 9889.978 - 9949.556: 2.1895% ( 62) 00:11:23.495 9949.556 - 10009.135: 2.9956% ( 81) 00:11:23.495 10009.135 - 10068.713: 3.5330% ( 54) 00:11:23.495 10068.713 - 10128.291: 4.2396% ( 71) 00:11:23.495 10128.291 - 10187.869: 5.0259% ( 79) 00:11:23.495 10187.869 - 10247.447: 5.7424% ( 72) 00:11:23.495 10247.447 - 10307.025: 6.4590% ( 72) 00:11:23.495 10307.025 - 10366.604: 7.2253% ( 77) 00:11:23.495 10366.604 - 10426.182: 8.1409% ( 92) 00:11:23.495 10426.182 - 10485.760: 8.8774% ( 74) 00:11:23.495 10485.760 - 10545.338: 9.8826% ( 101) 00:11:23.495 10545.338 - 10604.916: 11.0171% ( 114) 00:11:23.495 10604.916 - 10664.495: 12.1318% ( 112) 00:11:23.495 10664.495 - 10724.073: 13.4654% ( 134) 00:11:23.495 10724.073 - 10783.651: 14.8288% ( 137) 00:11:23.495 10783.651 - 10843.229: 16.1624% ( 134) 00:11:23.495 10843.229 - 10902.807: 17.5756% ( 142) 00:11:23.495 10902.807 - 10962.385: 18.9988% ( 143) 00:11:23.495 10962.385 - 11021.964: 20.4419% ( 145) 00:11:23.495 11021.964 - 11081.542: 21.9944% ( 156) 00:11:23.495 11081.542 - 11141.120: 23.3778% ( 139) 00:11:23.495 11141.120 - 11200.698: 24.9303% ( 156) 00:11:23.495 11200.698 - 11260.276: 26.4928% ( 157) 00:11:23.495 11260.276 - 11319.855: 27.9956% ( 151) 00:11:23.495 11319.855 - 
11379.433: 29.3790% ( 139) 00:11:23.495 11379.433 - 11439.011: 30.7225% ( 135) 00:11:23.495 11439.011 - 11498.589: 32.3447% ( 163) 00:11:23.495 11498.589 - 11558.167: 33.6982% ( 136) 00:11:23.495 11558.167 - 11617.745: 35.3503% ( 166) 00:11:23.495 11617.745 - 11677.324: 36.7038% ( 136) 00:11:23.495 11677.324 - 11736.902: 38.0872% ( 139) 00:11:23.495 11736.902 - 11796.480: 39.5701% ( 149) 00:11:23.495 11796.480 - 11856.058: 41.1525% ( 159) 00:11:23.495 11856.058 - 11915.636: 42.7747% ( 163) 00:11:23.495 11915.636 - 11975.215: 44.2874% ( 152) 00:11:23.495 11975.215 - 12034.793: 45.6708% ( 139) 00:11:23.495 12034.793 - 12094.371: 47.0939% ( 143) 00:11:23.495 12094.371 - 12153.949: 48.3380% ( 125) 00:11:23.495 12153.949 - 12213.527: 49.4725% ( 114) 00:11:23.495 12213.527 - 12273.105: 50.6369% ( 117) 00:11:23.495 12273.105 - 12332.684: 51.7516% ( 112) 00:11:23.495 12332.684 - 12392.262: 52.7468% ( 100) 00:11:23.495 12392.262 - 12451.840: 53.8615% ( 112) 00:11:23.495 12451.840 - 12511.418: 54.8666% ( 101) 00:11:23.495 12511.418 - 12570.996: 55.9315% ( 107) 00:11:23.495 12570.996 - 12630.575: 56.9168% ( 99) 00:11:23.495 12630.575 - 12690.153: 57.7428% ( 83) 00:11:23.495 12690.153 - 12749.731: 58.5390% ( 80) 00:11:23.495 12749.731 - 12809.309: 59.2556% ( 72) 00:11:23.495 12809.309 - 12868.887: 59.9025% ( 65) 00:11:23.495 12868.887 - 12928.465: 60.5792% ( 68) 00:11:23.495 12928.465 - 12988.044: 61.1863% ( 61) 00:11:23.495 12988.044 - 13047.622: 61.8631% ( 68) 00:11:23.495 13047.622 - 13107.200: 62.5697% ( 71) 00:11:23.495 13107.200 - 13166.778: 63.3957% ( 83) 00:11:23.495 13166.778 - 13226.356: 64.2615% ( 87) 00:11:23.495 13226.356 - 13285.935: 65.2667% ( 101) 00:11:23.495 13285.935 - 13345.513: 66.2122% ( 95) 00:11:23.495 13345.513 - 13405.091: 67.2273% ( 102) 00:11:23.495 13405.091 - 13464.669: 68.1927% ( 97) 00:11:23.495 13464.669 - 13524.247: 69.1182% ( 93) 00:11:23.495 13524.247 - 13583.825: 70.1234% ( 101) 00:11:23.495 13583.825 - 13643.404: 71.1883% ( 107) 00:11:23.495 13643.404 - 13702.982: 72.2233% ( 104) 00:11:23.495 13702.982 - 13762.560: 73.3579% ( 114) 00:11:23.495 13762.560 - 13822.138: 74.5422% ( 119) 00:11:23.495 13822.138 - 13881.716: 75.6568% ( 112) 00:11:23.495 13881.716 - 13941.295: 76.8611% ( 121) 00:11:23.495 13941.295 - 14000.873: 77.9459% ( 109) 00:11:23.495 14000.873 - 14060.451: 79.0506% ( 111) 00:11:23.495 14060.451 - 14120.029: 80.2448% ( 120) 00:11:23.495 14120.029 - 14179.607: 81.4391% ( 120) 00:11:23.495 14179.607 - 14239.185: 82.3846% ( 95) 00:11:23.495 14239.185 - 14298.764: 83.5092% ( 113) 00:11:23.495 14298.764 - 14358.342: 84.6437% ( 114) 00:11:23.495 14358.342 - 14417.920: 85.7385% ( 110) 00:11:23.495 14417.920 - 14477.498: 86.8830% ( 115) 00:11:23.495 14477.498 - 14537.076: 87.8185% ( 94) 00:11:23.495 14537.076 - 14596.655: 88.7938% ( 98) 00:11:23.495 14596.655 - 14656.233: 89.5900% ( 80) 00:11:23.495 14656.233 - 14715.811: 90.5354% ( 95) 00:11:23.495 14715.811 - 14775.389: 91.2520% ( 72) 00:11:23.495 14775.389 - 14834.967: 91.9686% ( 72) 00:11:23.495 14834.967 - 14894.545: 92.6453% ( 68) 00:11:23.495 14894.545 - 14954.124: 93.3121% ( 67) 00:11:23.495 14954.124 - 15013.702: 93.8396% ( 53) 00:11:23.495 15013.702 - 15073.280: 94.3571% ( 52) 00:11:23.495 15073.280 - 15132.858: 94.8447% ( 49) 00:11:23.495 15132.858 - 15192.436: 95.2627% ( 42) 00:11:23.495 15192.436 - 15252.015: 95.5713% ( 31) 00:11:23.495 15252.015 - 15371.171: 96.1286% ( 56) 00:11:23.495 15371.171 - 15490.327: 96.5963% ( 47) 00:11:23.495 15490.327 - 15609.484: 96.9148% ( 32) 00:11:23.495 15609.484 
- 15728.640: 97.2233% ( 31) 00:11:23.495 15728.640 - 15847.796: 97.3428% ( 12) 00:11:23.495 15847.796 - 15966.953: 97.4025% ( 6) 00:11:23.495 15966.953 - 16086.109: 97.4423% ( 4) 00:11:23.495 16086.109 - 16205.265: 97.4522% ( 1) 00:11:23.495 17635.142 - 17754.298: 97.5418% ( 9) 00:11:23.495 17754.298 - 17873.455: 97.5816% ( 4) 00:11:23.495 17873.455 - 17992.611: 97.6612% ( 8) 00:11:23.495 17992.611 - 18111.767: 97.7607% ( 10) 00:11:23.495 18111.767 - 18230.924: 97.8603% ( 10) 00:11:23.495 18230.924 - 18350.080: 97.9598% ( 10) 00:11:23.495 18350.080 - 18469.236: 98.0494% ( 9) 00:11:23.495 18469.236 - 18588.393: 98.1290% ( 8) 00:11:23.495 18588.393 - 18707.549: 98.2285% ( 10) 00:11:23.495 18707.549 - 18826.705: 98.2982% ( 7) 00:11:23.495 18826.705 - 18945.862: 98.3877% ( 9) 00:11:23.495 18945.862 - 19065.018: 98.4873% ( 10) 00:11:23.495 19065.018 - 19184.175: 98.5669% ( 8) 00:11:23.495 19184.175 - 19303.331: 98.6365% ( 7) 00:11:23.495 19303.331 - 19422.487: 98.6963% ( 6) 00:11:23.495 19422.487 - 19541.644: 98.7261% ( 3) 00:11:23.495 32887.156 - 33125.469: 98.7361% ( 1) 00:11:23.495 33125.469 - 33363.782: 98.7560% ( 2) 00:11:23.495 33363.782 - 33602.095: 98.8256% ( 7) 00:11:23.495 33602.095 - 33840.407: 98.8754% ( 5) 00:11:23.495 33840.407 - 34078.720: 98.9252% ( 5) 00:11:23.495 34078.720 - 34317.033: 98.9849% ( 6) 00:11:23.496 34317.033 - 34555.345: 99.0247% ( 4) 00:11:23.496 34555.345 - 34793.658: 99.0744% ( 5) 00:11:23.496 34793.658 - 35031.971: 99.1342% ( 6) 00:11:23.496 35031.971 - 35270.284: 99.1839% ( 5) 00:11:23.496 35270.284 - 35508.596: 99.2337% ( 5) 00:11:23.496 35508.596 - 35746.909: 99.3033% ( 7) 00:11:23.496 35746.909 - 35985.222: 99.3631% ( 6) 00:11:23.496 42419.665 - 42657.978: 99.4228% ( 6) 00:11:23.496 42657.978 - 42896.291: 99.4725% ( 5) 00:11:23.496 42896.291 - 43134.604: 99.5322% ( 6) 00:11:23.496 43134.604 - 43372.916: 99.5621% ( 3) 00:11:23.496 43372.916 - 43611.229: 99.6218% ( 6) 00:11:23.496 43611.229 - 43849.542: 99.6815% ( 6) 00:11:23.496 43849.542 - 44087.855: 99.7213% ( 4) 00:11:23.496 44087.855 - 44326.167: 99.7811% ( 6) 00:11:23.496 44326.167 - 44564.480: 99.8408% ( 6) 00:11:23.496 44564.480 - 44802.793: 99.8905% ( 5) 00:11:23.496 44802.793 - 45041.105: 99.9502% ( 6) 00:11:23.496 45041.105 - 45279.418: 100.0000% ( 5) 00:11:23.496 00:11:23.496 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:23.496 ============================================================================== 00:11:23.496 Range in us Cumulative IO count 00:11:23.496 9651.665 - 9711.244: 0.0597% ( 6) 00:11:23.496 9711.244 - 9770.822: 0.2986% ( 24) 00:11:23.496 9770.822 - 9830.400: 0.6867% ( 39) 00:11:23.496 9830.400 - 9889.978: 1.2241% ( 54) 00:11:23.496 9889.978 - 9949.556: 1.8810% ( 66) 00:11:23.496 9949.556 - 10009.135: 2.4084% ( 53) 00:11:23.496 10009.135 - 10068.713: 3.0255% ( 62) 00:11:23.496 10068.713 - 10128.291: 3.5729% ( 55) 00:11:23.496 10128.291 - 10187.869: 4.3292% ( 76) 00:11:23.496 10187.869 - 10247.447: 5.2349% ( 91) 00:11:23.496 10247.447 - 10307.025: 6.0908% ( 86) 00:11:23.496 10307.025 - 10366.604: 6.9068% ( 82) 00:11:23.496 10366.604 - 10426.182: 7.6732% ( 77) 00:11:23.496 10426.182 - 10485.760: 8.6286% ( 96) 00:11:23.496 10485.760 - 10545.338: 9.8428% ( 122) 00:11:23.496 10545.338 - 10604.916: 10.9275% ( 109) 00:11:23.496 10604.916 - 10664.495: 12.0820% ( 116) 00:11:23.496 10664.495 - 10724.073: 13.4256% ( 135) 00:11:23.496 10724.073 - 10783.651: 14.7990% ( 138) 00:11:23.496 10783.651 - 10843.229: 16.2520% ( 146) 00:11:23.496 10843.229 - 10902.807: 17.5856% 
( 134) 00:11:23.496 10902.807 - 10962.385: 18.9789% ( 140) 00:11:23.496 10962.385 - 11021.964: 20.3225% ( 135) 00:11:23.496 11021.964 - 11081.542: 21.7755% ( 146) 00:11:23.496 11081.542 - 11141.120: 22.9996% ( 123) 00:11:23.496 11141.120 - 11200.698: 24.3929% ( 140) 00:11:23.496 11200.698 - 11260.276: 25.7564% ( 137) 00:11:23.496 11260.276 - 11319.855: 27.1895% ( 144) 00:11:23.496 11319.855 - 11379.433: 28.7221% ( 154) 00:11:23.496 11379.433 - 11439.011: 30.1851% ( 147) 00:11:23.496 11439.011 - 11498.589: 31.7576% ( 158) 00:11:23.496 11498.589 - 11558.167: 33.3499% ( 160) 00:11:23.496 11558.167 - 11617.745: 35.0020% ( 166) 00:11:23.496 11617.745 - 11677.324: 36.7536% ( 176) 00:11:23.496 11677.324 - 11736.902: 38.5052% ( 176) 00:11:23.496 11736.902 - 11796.480: 40.2966% ( 180) 00:11:23.496 11796.480 - 11856.058: 42.0581% ( 177) 00:11:23.496 11856.058 - 11915.636: 43.6007% ( 155) 00:11:23.496 11915.636 - 11975.215: 45.0736% ( 148) 00:11:23.496 11975.215 - 12034.793: 46.5267% ( 146) 00:11:23.496 12034.793 - 12094.371: 47.8105% ( 129) 00:11:23.496 12094.371 - 12153.949: 49.0645% ( 126) 00:11:23.496 12153.949 - 12213.527: 50.3284% ( 127) 00:11:23.496 12213.527 - 12273.105: 51.4431% ( 112) 00:11:23.496 12273.105 - 12332.684: 52.5876% ( 115) 00:11:23.496 12332.684 - 12392.262: 53.7221% ( 114) 00:11:23.496 12392.262 - 12451.840: 54.8069% ( 109) 00:11:23.496 12451.840 - 12511.418: 55.7424% ( 94) 00:11:23.496 12511.418 - 12570.996: 56.5983% ( 86) 00:11:23.496 12570.996 - 12630.575: 57.3447% ( 75) 00:11:23.496 12630.575 - 12690.153: 58.1807% ( 84) 00:11:23.496 12690.153 - 12749.731: 59.1361% ( 96) 00:11:23.496 12749.731 - 12809.309: 59.9323% ( 80) 00:11:23.496 12809.309 - 12868.887: 60.5892% ( 66) 00:11:23.496 12868.887 - 12928.465: 61.1564% ( 57) 00:11:23.496 12928.465 - 12988.044: 61.6740% ( 52) 00:11:23.496 12988.044 - 13047.622: 62.1318% ( 46) 00:11:23.496 13047.622 - 13107.200: 62.6592% ( 53) 00:11:23.496 13107.200 - 13166.778: 63.2763% ( 62) 00:11:23.496 13166.778 - 13226.356: 64.0127% ( 74) 00:11:23.496 13226.356 - 13285.935: 64.5601% ( 55) 00:11:23.496 13285.935 - 13345.513: 65.1075% ( 55) 00:11:23.496 13345.513 - 13405.091: 66.1326% ( 103) 00:11:23.496 13405.091 - 13464.669: 66.9486% ( 82) 00:11:23.496 13464.669 - 13524.247: 67.7747% ( 83) 00:11:23.496 13524.247 - 13583.825: 68.9192% ( 115) 00:11:23.496 13583.825 - 13643.404: 70.1135% ( 120) 00:11:23.496 13643.404 - 13702.982: 71.2679% ( 116) 00:11:23.496 13702.982 - 13762.560: 72.4423% ( 118) 00:11:23.496 13762.560 - 13822.138: 73.6465% ( 121) 00:11:23.496 13822.138 - 13881.716: 74.9502% ( 131) 00:11:23.496 13881.716 - 13941.295: 76.2341% ( 129) 00:11:23.496 13941.295 - 14000.873: 77.6174% ( 139) 00:11:23.496 14000.873 - 14060.451: 79.0904% ( 148) 00:11:23.496 14060.451 - 14120.029: 80.4737% ( 139) 00:11:23.496 14120.029 - 14179.607: 81.8571% ( 139) 00:11:23.496 14179.607 - 14239.185: 83.2404% ( 139) 00:11:23.496 14239.185 - 14298.764: 84.5740% ( 134) 00:11:23.496 14298.764 - 14358.342: 85.7186% ( 115) 00:11:23.496 14358.342 - 14417.920: 86.8232% ( 111) 00:11:23.496 14417.920 - 14477.498: 87.8185% ( 100) 00:11:23.496 14477.498 - 14537.076: 88.7241% ( 91) 00:11:23.496 14537.076 - 14596.655: 89.5701% ( 85) 00:11:23.496 14596.655 - 14656.233: 90.3264% ( 76) 00:11:23.496 14656.233 - 14715.811: 91.0131% ( 69) 00:11:23.496 14715.811 - 14775.389: 91.7695% ( 76) 00:11:23.496 14775.389 - 14834.967: 92.5060% ( 74) 00:11:23.496 14834.967 - 14894.545: 93.2822% ( 78) 00:11:23.496 14894.545 - 14954.124: 93.9391% ( 66) 00:11:23.496 14954.124 - 15013.702: 
94.4865% ( 55) 00:11:23.496 15013.702 - 15073.280: 95.0239% ( 54) 00:11:23.496 15073.280 - 15132.858: 95.3921% ( 37) 00:11:23.496 15132.858 - 15192.436: 95.7703% ( 38) 00:11:23.496 15192.436 - 15252.015: 96.0888% ( 32) 00:11:23.496 15252.015 - 15371.171: 96.5864% ( 50) 00:11:23.496 15371.171 - 15490.327: 96.8850% ( 30) 00:11:23.496 15490.327 - 15609.484: 97.1238% ( 24) 00:11:23.496 15609.484 - 15728.640: 97.2631% ( 14) 00:11:23.496 15728.640 - 15847.796: 97.3925% ( 13) 00:11:23.496 15847.796 - 15966.953: 97.4522% ( 6) 00:11:23.496 17754.298 - 17873.455: 97.5119% ( 6) 00:11:23.496 17873.455 - 17992.611: 97.6314% ( 12) 00:11:23.496 17992.611 - 18111.767: 97.7010% ( 7) 00:11:23.496 18111.767 - 18230.924: 97.8006% ( 10) 00:11:23.496 18230.924 - 18350.080: 97.9100% ( 11) 00:11:23.496 18350.080 - 18469.236: 98.0195% ( 11) 00:11:23.496 18469.236 - 18588.393: 98.1290% ( 11) 00:11:23.496 18588.393 - 18707.549: 98.2385% ( 11) 00:11:23.496 18707.549 - 18826.705: 98.3479% ( 11) 00:11:23.496 18826.705 - 18945.862: 98.4674% ( 12) 00:11:23.496 18945.862 - 19065.018: 98.5171% ( 5) 00:11:23.496 19065.018 - 19184.175: 98.5569% ( 4) 00:11:23.496 19184.175 - 19303.331: 98.6166% ( 6) 00:11:23.496 19303.331 - 19422.487: 98.6664% ( 5) 00:11:23.496 19422.487 - 19541.644: 98.7261% ( 6) 00:11:23.496 31933.905 - 32172.218: 98.7858% ( 6) 00:11:23.496 32172.218 - 32410.531: 98.8455% ( 6) 00:11:23.496 32410.531 - 32648.844: 98.8953% ( 5) 00:11:23.496 32648.844 - 32887.156: 98.9650% ( 7) 00:11:23.496 32887.156 - 33125.469: 99.0247% ( 6) 00:11:23.496 33125.469 - 33363.782: 99.0844% ( 6) 00:11:23.496 33363.782 - 33602.095: 99.1441% ( 6) 00:11:23.496 33602.095 - 33840.407: 99.2038% ( 6) 00:11:23.496 33840.407 - 34078.720: 99.2635% ( 6) 00:11:23.496 34078.720 - 34317.033: 99.3133% ( 5) 00:11:23.496 34317.033 - 34555.345: 99.3631% ( 5) 00:11:23.496 40036.538 - 40274.851: 99.3929% ( 3) 00:11:23.496 40274.851 - 40513.164: 99.4626% ( 7) 00:11:23.496 40513.164 - 40751.476: 99.5223% ( 6) 00:11:23.496 40751.476 - 40989.789: 99.5721% ( 5) 00:11:23.496 40989.789 - 41228.102: 99.6318% ( 6) 00:11:23.496 41228.102 - 41466.415: 99.7014% ( 7) 00:11:23.497 41466.415 - 41704.727: 99.7611% ( 6) 00:11:23.497 41704.727 - 41943.040: 99.8209% ( 6) 00:11:23.497 41943.040 - 42181.353: 99.8806% ( 6) 00:11:23.497 42181.353 - 42419.665: 99.9403% ( 6) 00:11:23.497 42419.665 - 42657.978: 100.0000% ( 6) 00:11:23.497 00:11:23.497 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:23.497 ============================================================================== 00:11:23.497 Range in us Cumulative IO count 00:11:23.497 9532.509 - 9592.087: 0.0100% ( 1) 00:11:23.497 9592.087 - 9651.665: 0.0299% ( 2) 00:11:23.497 9651.665 - 9711.244: 0.0896% ( 6) 00:11:23.497 9711.244 - 9770.822: 0.3185% ( 23) 00:11:23.497 9770.822 - 9830.400: 0.5673% ( 25) 00:11:23.497 9830.400 - 9889.978: 0.8559% ( 29) 00:11:23.497 9889.978 - 9949.556: 1.4729% ( 62) 00:11:23.497 9949.556 - 10009.135: 2.0402% ( 57) 00:11:23.497 10009.135 - 10068.713: 2.6771% ( 64) 00:11:23.497 10068.713 - 10128.291: 3.4236% ( 75) 00:11:23.497 10128.291 - 10187.869: 4.1202% ( 70) 00:11:23.497 10187.869 - 10247.447: 4.8268% ( 71) 00:11:23.497 10247.447 - 10307.025: 5.7524% ( 93) 00:11:23.497 10307.025 - 10366.604: 6.5784% ( 83) 00:11:23.497 10366.604 - 10426.182: 7.5836% ( 101) 00:11:23.497 10426.182 - 10485.760: 8.6286% ( 105) 00:11:23.497 10485.760 - 10545.338: 9.8029% ( 118) 00:11:23.497 10545.338 - 10604.916: 11.0868% ( 129) 00:11:23.497 10604.916 - 10664.495: 12.1417% ( 106) 
00:11:23.497 10664.495 - 10724.073: 13.2763% ( 114) 00:11:23.497 10724.073 - 10783.651: 14.5999% ( 133) 00:11:23.497 10783.651 - 10843.229: 16.1127% ( 152) 00:11:23.497 10843.229 - 10902.807: 17.5259% ( 142) 00:11:23.497 10902.807 - 10962.385: 19.0585% ( 154) 00:11:23.497 10962.385 - 11021.964: 20.7703% ( 172) 00:11:23.497 11021.964 - 11081.542: 22.4124% ( 165) 00:11:23.497 11081.542 - 11141.120: 23.8555% ( 145) 00:11:23.497 11141.120 - 11200.698: 25.3782% ( 153) 00:11:23.497 11200.698 - 11260.276: 26.7615% ( 139) 00:11:23.497 11260.276 - 11319.855: 28.3041% ( 155) 00:11:23.497 11319.855 - 11379.433: 29.7074% ( 141) 00:11:23.497 11379.433 - 11439.011: 31.0808% ( 138) 00:11:23.497 11439.011 - 11498.589: 32.5936% ( 152) 00:11:23.497 11498.589 - 11558.167: 34.1760% ( 159) 00:11:23.497 11558.167 - 11617.745: 35.7385% ( 157) 00:11:23.497 11617.745 - 11677.324: 37.3507% ( 162) 00:11:23.497 11677.324 - 11736.902: 38.8834% ( 154) 00:11:23.497 11736.902 - 11796.480: 40.5553% ( 168) 00:11:23.497 11796.480 - 11856.058: 41.9785% ( 143) 00:11:23.497 11856.058 - 11915.636: 43.4216% ( 145) 00:11:23.497 11915.636 - 11975.215: 44.7751% ( 136) 00:11:23.497 11975.215 - 12034.793: 45.9992% ( 123) 00:11:23.497 12034.793 - 12094.371: 47.2432% ( 125) 00:11:23.497 12094.371 - 12153.949: 48.4475% ( 121) 00:11:23.497 12153.949 - 12213.527: 49.6318% ( 119) 00:11:23.497 12213.527 - 12273.105: 50.7862% ( 116) 00:11:23.497 12273.105 - 12332.684: 51.8611% ( 108) 00:11:23.497 12332.684 - 12392.262: 52.9459% ( 109) 00:11:23.497 12392.262 - 12451.840: 53.9510% ( 101) 00:11:23.497 12451.840 - 12511.418: 54.9562% ( 101) 00:11:23.497 12511.418 - 12570.996: 55.8718% ( 92) 00:11:23.497 12570.996 - 12630.575: 56.7874% ( 92) 00:11:23.497 12630.575 - 12690.153: 57.7030% ( 92) 00:11:23.497 12690.153 - 12749.731: 58.4395% ( 74) 00:11:23.497 12749.731 - 12809.309: 59.2257% ( 79) 00:11:23.497 12809.309 - 12868.887: 59.8726% ( 65) 00:11:23.497 12868.887 - 12928.465: 60.5195% ( 65) 00:11:23.497 12928.465 - 12988.044: 61.1266% ( 61) 00:11:23.497 12988.044 - 13047.622: 61.6939% ( 57) 00:11:23.497 13047.622 - 13107.200: 62.3905% ( 70) 00:11:23.497 13107.200 - 13166.778: 62.9976% ( 61) 00:11:23.497 13166.778 - 13226.356: 63.6545% ( 66) 00:11:23.497 13226.356 - 13285.935: 64.3113% ( 66) 00:11:23.497 13285.935 - 13345.513: 65.2170% ( 91) 00:11:23.497 13345.513 - 13405.091: 66.1226% ( 91) 00:11:23.497 13405.091 - 13464.669: 67.1477% ( 103) 00:11:23.497 13464.669 - 13524.247: 68.3221% ( 118) 00:11:23.497 13524.247 - 13583.825: 69.5561% ( 124) 00:11:23.497 13583.825 - 13643.404: 70.7703% ( 122) 00:11:23.497 13643.404 - 13702.982: 71.9447% ( 118) 00:11:23.497 13702.982 - 13762.560: 73.2086% ( 127) 00:11:23.497 13762.560 - 13822.138: 74.3830% ( 118) 00:11:23.497 13822.138 - 13881.716: 75.5474% ( 117) 00:11:23.497 13881.716 - 13941.295: 76.7715% ( 123) 00:11:23.497 13941.295 - 14000.873: 77.9061% ( 114) 00:11:23.497 14000.873 - 14060.451: 79.2695% ( 137) 00:11:23.497 14060.451 - 14120.029: 80.6031% ( 134) 00:11:23.497 14120.029 - 14179.607: 81.8073% ( 121) 00:11:23.497 14179.607 - 14239.185: 83.0514% ( 125) 00:11:23.497 14239.185 - 14298.764: 84.2456% ( 120) 00:11:23.497 14298.764 - 14358.342: 85.4100% ( 117) 00:11:23.497 14358.342 - 14417.920: 86.4948% ( 109) 00:11:23.497 14417.920 - 14477.498: 87.5597% ( 107) 00:11:23.497 14477.498 - 14537.076: 88.5350% ( 98) 00:11:23.497 14537.076 - 14596.655: 89.4705% ( 94) 00:11:23.497 14596.655 - 14656.233: 90.3165% ( 85) 00:11:23.497 14656.233 - 14715.811: 91.1027% ( 79) 00:11:23.497 14715.811 - 14775.389: 
91.8889% ( 79) 00:11:23.497 14775.389 - 14834.967: 92.7050% ( 82) 00:11:23.497 14834.967 - 14894.545: 93.3320% ( 63) 00:11:23.497 14894.545 - 14954.124: 93.9391% ( 61) 00:11:23.497 14954.124 - 15013.702: 94.4666% ( 53) 00:11:23.497 15013.702 - 15073.280: 94.9244% ( 46) 00:11:23.497 15073.280 - 15132.858: 95.3025% ( 38) 00:11:23.497 15132.858 - 15192.436: 95.6608% ( 36) 00:11:23.497 15192.436 - 15252.015: 96.0092% ( 35) 00:11:23.497 15252.015 - 15371.171: 96.5466% ( 54) 00:11:23.497 15371.171 - 15490.327: 96.9049% ( 36) 00:11:23.497 15490.327 - 15609.484: 97.1238% ( 22) 00:11:23.497 15609.484 - 15728.640: 97.2333% ( 11) 00:11:23.497 15728.640 - 15847.796: 97.3428% ( 11) 00:11:23.497 15847.796 - 15966.953: 97.4224% ( 8) 00:11:23.497 15966.953 - 16086.109: 97.4522% ( 3) 00:11:23.497 17515.985 - 17635.142: 97.5717% ( 12) 00:11:23.497 17635.142 - 17754.298: 97.6612% ( 9) 00:11:23.497 17754.298 - 17873.455: 97.7807% ( 12) 00:11:23.497 17873.455 - 17992.611: 97.8901% ( 11) 00:11:23.497 17992.611 - 18111.767: 98.0096% ( 12) 00:11:23.497 18111.767 - 18230.924: 98.1190% ( 11) 00:11:23.497 18230.924 - 18350.080: 98.2285% ( 11) 00:11:23.497 18350.080 - 18469.236: 98.3280% ( 10) 00:11:23.497 18469.236 - 18588.393: 98.4275% ( 10) 00:11:23.497 18588.393 - 18707.549: 98.5271% ( 10) 00:11:23.497 18707.549 - 18826.705: 98.5967% ( 7) 00:11:23.497 18826.705 - 18945.862: 98.6266% ( 3) 00:11:23.497 18945.862 - 19065.018: 98.6564% ( 3) 00:11:23.497 19065.018 - 19184.175: 98.6963% ( 4) 00:11:23.497 19184.175 - 19303.331: 98.7261% ( 3) 00:11:23.497 30027.404 - 30146.560: 98.7361% ( 1) 00:11:23.497 30146.560 - 30265.716: 98.7560% ( 2) 00:11:23.497 30265.716 - 30384.873: 98.7858% ( 3) 00:11:23.497 30384.873 - 30504.029: 98.8157% ( 3) 00:11:23.497 30504.029 - 30742.342: 98.8754% ( 6) 00:11:23.497 30742.342 - 30980.655: 98.9351% ( 6) 00:11:23.497 30980.655 - 31218.967: 98.9948% ( 6) 00:11:23.497 31218.967 - 31457.280: 99.0545% ( 6) 00:11:23.497 31457.280 - 31695.593: 99.1143% ( 6) 00:11:23.497 31695.593 - 31933.905: 99.1740% ( 6) 00:11:23.497 31933.905 - 32172.218: 99.2337% ( 6) 00:11:23.497 32172.218 - 32410.531: 99.2934% ( 6) 00:11:23.497 32410.531 - 32648.844: 99.3631% ( 7) 00:11:23.497 38130.036 - 38368.349: 99.3830% ( 2) 00:11:23.497 38368.349 - 38606.662: 99.4327% ( 5) 00:11:23.497 38606.662 - 38844.975: 99.4924% ( 6) 00:11:23.497 38844.975 - 39083.287: 99.5422% ( 5) 00:11:23.497 39083.287 - 39321.600: 99.6019% ( 6) 00:11:23.497 39321.600 - 39559.913: 99.6716% ( 7) 00:11:23.497 39559.913 - 39798.225: 99.7313% ( 6) 00:11:23.497 39798.225 - 40036.538: 99.7910% ( 6) 00:11:23.497 40036.538 - 40274.851: 99.8507% ( 6) 00:11:23.497 40274.851 - 40513.164: 99.9104% ( 6) 00:11:23.497 40513.164 - 40751.476: 99.9701% ( 6) 00:11:23.497 40751.476 - 40989.789: 100.0000% ( 3) 00:11:23.497 00:11:23.497 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:23.497 ============================================================================== 00:11:23.497 Range in us Cumulative IO count 00:11:23.497 9532.509 - 9592.087: 0.0100% ( 1) 00:11:23.497 9592.087 - 9651.665: 0.0199% ( 1) 00:11:23.497 9651.665 - 9711.244: 0.0398% ( 2) 00:11:23.497 9711.244 - 9770.822: 0.1095% ( 7) 00:11:23.497 9770.822 - 9830.400: 0.3085% ( 20) 00:11:23.497 9830.400 - 9889.978: 0.6768% ( 37) 00:11:23.497 9889.978 - 9949.556: 1.3236% ( 65) 00:11:23.497 9949.556 - 10009.135: 1.9705% ( 65) 00:11:23.497 10009.135 - 10068.713: 2.7966% ( 83) 00:11:23.497 10068.713 - 10128.291: 3.4634% ( 67) 00:11:23.498 10128.291 - 10187.869: 4.1302% ( 67) 
00:11:23.498 10187.869 - 10247.447: 4.9363% ( 81) 00:11:23.498 10247.447 - 10307.025: 5.7225% ( 79) 00:11:23.498 10307.025 - 10366.604: 6.5287% ( 81) 00:11:23.498 10366.604 - 10426.182: 7.3447% ( 82) 00:11:23.498 10426.182 - 10485.760: 8.3897% ( 105) 00:11:23.498 10485.760 - 10545.338: 9.5044% ( 112) 00:11:23.498 10545.338 - 10604.916: 10.6887% ( 119) 00:11:23.498 10604.916 - 10664.495: 11.8730% ( 119) 00:11:23.498 10664.495 - 10724.073: 13.1270% ( 126) 00:11:23.498 10724.073 - 10783.651: 14.4805% ( 136) 00:11:23.498 10783.651 - 10843.229: 15.9335% ( 146) 00:11:23.498 10843.229 - 10902.807: 17.4562% ( 153) 00:11:23.498 10902.807 - 10962.385: 18.9590% ( 151) 00:11:23.498 10962.385 - 11021.964: 20.4319% ( 148) 00:11:23.498 11021.964 - 11081.542: 21.7556% ( 133) 00:11:23.498 11081.542 - 11141.120: 23.0892% ( 134) 00:11:23.498 11141.120 - 11200.698: 24.4825% ( 140) 00:11:23.498 11200.698 - 11260.276: 25.7166% ( 124) 00:11:23.498 11260.276 - 11319.855: 27.0402% ( 133) 00:11:23.498 11319.855 - 11379.433: 28.4634% ( 143) 00:11:23.498 11379.433 - 11439.011: 30.0259% ( 157) 00:11:23.498 11439.011 - 11498.589: 31.8173% ( 180) 00:11:23.498 11498.589 - 11558.167: 33.5191% ( 171) 00:11:23.498 11558.167 - 11617.745: 35.2906% ( 178) 00:11:23.498 11617.745 - 11677.324: 37.0422% ( 176) 00:11:23.498 11677.324 - 11736.902: 38.7042% ( 167) 00:11:23.498 11736.902 - 11796.480: 40.2170% ( 152) 00:11:23.498 11796.480 - 11856.058: 41.7894% ( 158) 00:11:23.498 11856.058 - 11915.636: 43.2623% ( 148) 00:11:23.498 11915.636 - 11975.215: 44.6855% ( 143) 00:11:23.498 11975.215 - 12034.793: 45.9494% ( 127) 00:11:23.498 12034.793 - 12094.371: 47.2034% ( 126) 00:11:23.498 12094.371 - 12153.949: 48.3977% ( 120) 00:11:23.498 12153.949 - 12213.527: 49.5521% ( 116) 00:11:23.498 12213.527 - 12273.105: 50.6768% ( 113) 00:11:23.498 12273.105 - 12332.684: 51.8511% ( 118) 00:11:23.498 12332.684 - 12392.262: 52.9658% ( 112) 00:11:23.498 12392.262 - 12451.840: 54.1103% ( 115) 00:11:23.498 12451.840 - 12511.418: 55.1553% ( 105) 00:11:23.498 12511.418 - 12570.996: 56.1405% ( 99) 00:11:23.498 12570.996 - 12630.575: 57.0561% ( 92) 00:11:23.498 12630.575 - 12690.153: 57.9817% ( 93) 00:11:23.498 12690.153 - 12749.731: 58.6883% ( 71) 00:11:23.498 12749.731 - 12809.309: 59.4248% ( 74) 00:11:23.498 12809.309 - 12868.887: 60.2110% ( 79) 00:11:23.498 12868.887 - 12928.465: 60.9574% ( 75) 00:11:23.498 12928.465 - 12988.044: 61.5446% ( 59) 00:11:23.498 12988.044 - 13047.622: 62.1517% ( 61) 00:11:23.498 13047.622 - 13107.200: 62.8583% ( 71) 00:11:23.498 13107.200 - 13166.778: 63.5052% ( 65) 00:11:23.498 13166.778 - 13226.356: 64.2118% ( 71) 00:11:23.498 13226.356 - 13285.935: 64.8388% ( 63) 00:11:23.498 13285.935 - 13345.513: 65.5354% ( 70) 00:11:23.498 13345.513 - 13405.091: 66.3814% ( 85) 00:11:23.498 13405.091 - 13464.669: 67.3268% ( 95) 00:11:23.498 13464.669 - 13524.247: 68.4216% ( 110) 00:11:23.498 13524.247 - 13583.825: 69.5860% ( 117) 00:11:23.498 13583.825 - 13643.404: 70.8798% ( 130) 00:11:23.498 13643.404 - 13702.982: 72.1835% ( 131) 00:11:23.498 13702.982 - 13762.560: 73.3280% ( 115) 00:11:23.498 13762.560 - 13822.138: 74.5024% ( 118) 00:11:23.498 13822.138 - 13881.716: 75.6469% ( 115) 00:11:23.498 13881.716 - 13941.295: 76.7715% ( 113) 00:11:23.498 13941.295 - 14000.873: 77.9558% ( 119) 00:11:23.498 14000.873 - 14060.451: 79.3591% ( 141) 00:11:23.498 14060.451 - 14120.029: 80.6230% ( 127) 00:11:23.498 14120.029 - 14179.607: 81.9168% ( 130) 00:11:23.498 14179.607 - 14239.185: 82.9817% ( 107) 00:11:23.498 14239.185 - 14298.764: 84.1162% 
( 114) 00:11:23.498 14298.764 - 14358.342: 85.2408% ( 113) 00:11:23.498 14358.342 - 14417.920: 86.4650% ( 123) 00:11:23.498 14417.920 - 14477.498: 87.6095% ( 115) 00:11:23.498 14477.498 - 14537.076: 88.7142% ( 111) 00:11:23.498 14537.076 - 14596.655: 89.7193% ( 101) 00:11:23.498 14596.655 - 14656.233: 90.6350% ( 92) 00:11:23.498 14656.233 - 14715.811: 91.4311% ( 80) 00:11:23.498 14715.811 - 14775.389: 92.1576% ( 73) 00:11:23.498 14775.389 - 14834.967: 92.9240% ( 77) 00:11:23.498 14834.967 - 14894.545: 93.6007% ( 68) 00:11:23.498 14894.545 - 14954.124: 94.2277% ( 63) 00:11:23.498 14954.124 - 15013.702: 94.7452% ( 52) 00:11:23.498 15013.702 - 15073.280: 95.2627% ( 52) 00:11:23.498 15073.280 - 15132.858: 95.7205% ( 46) 00:11:23.498 15132.858 - 15192.436: 96.0689% ( 35) 00:11:23.498 15192.436 - 15252.015: 96.3376% ( 27) 00:11:23.498 15252.015 - 15371.171: 96.7556% ( 42) 00:11:23.498 15371.171 - 15490.327: 97.0243% ( 27) 00:11:23.498 15490.327 - 15609.484: 97.1835% ( 16) 00:11:23.498 15609.484 - 15728.640: 97.2930% ( 11) 00:11:23.498 15728.640 - 15847.796: 97.3726% ( 8) 00:11:23.498 15847.796 - 15966.953: 97.4323% ( 6) 00:11:23.498 15966.953 - 16086.109: 97.4522% ( 2) 00:11:23.498 17515.985 - 17635.142: 97.5318% ( 8) 00:11:23.498 17635.142 - 17754.298: 97.6513% ( 12) 00:11:23.498 17754.298 - 17873.455: 97.6911% ( 4) 00:11:23.498 17873.455 - 17992.611: 97.8105% ( 12) 00:11:23.498 17992.611 - 18111.767: 97.9797% ( 17) 00:11:23.498 18111.767 - 18230.924: 98.0693% ( 9) 00:11:23.498 18230.924 - 18350.080: 98.1489% ( 8) 00:11:23.498 18350.080 - 18469.236: 98.2484% ( 10) 00:11:23.498 18469.236 - 18588.393: 98.3479% ( 10) 00:11:23.498 18588.393 - 18707.549: 98.4375% ( 9) 00:11:23.498 18707.549 - 18826.705: 98.5370% ( 10) 00:11:23.498 18826.705 - 18945.862: 98.6465% ( 11) 00:11:23.498 18945.862 - 19065.018: 98.7261% ( 8) 00:11:23.498 27525.120 - 27644.276: 98.7460% ( 2) 00:11:23.498 27644.276 - 27763.433: 98.7659% ( 2) 00:11:23.498 27763.433 - 27882.589: 98.7958% ( 3) 00:11:23.498 27882.589 - 28001.745: 98.8256% ( 3) 00:11:23.498 28001.745 - 28120.902: 98.8455% ( 2) 00:11:23.498 28120.902 - 28240.058: 98.8754% ( 3) 00:11:23.498 28240.058 - 28359.215: 98.9152% ( 4) 00:11:23.498 28359.215 - 28478.371: 98.9451% ( 3) 00:11:23.498 28478.371 - 28597.527: 98.9749% ( 3) 00:11:23.498 28597.527 - 28716.684: 99.0048% ( 3) 00:11:23.498 28716.684 - 28835.840: 99.0346% ( 3) 00:11:23.498 28835.840 - 28954.996: 99.0645% ( 3) 00:11:23.498 28954.996 - 29074.153: 99.0943% ( 3) 00:11:23.498 29074.153 - 29193.309: 99.1242% ( 3) 00:11:23.498 29193.309 - 29312.465: 99.1541% ( 3) 00:11:23.498 29312.465 - 29431.622: 99.1839% ( 3) 00:11:23.498 29431.622 - 29550.778: 99.2138% ( 3) 00:11:23.498 29550.778 - 29669.935: 99.2436% ( 3) 00:11:23.498 29669.935 - 29789.091: 99.2735% ( 3) 00:11:23.498 29789.091 - 29908.247: 99.3033% ( 3) 00:11:23.498 29908.247 - 30027.404: 99.3332% ( 3) 00:11:23.498 30027.404 - 30146.560: 99.3631% ( 3) 00:11:23.498 35746.909 - 35985.222: 99.4029% ( 4) 00:11:23.498 35985.222 - 36223.535: 99.4725% ( 7) 00:11:23.498 36223.535 - 36461.847: 99.5223% ( 5) 00:11:23.498 36461.847 - 36700.160: 99.5820% ( 6) 00:11:23.498 36700.160 - 36938.473: 99.6417% ( 6) 00:11:23.498 36938.473 - 37176.785: 99.7014% ( 6) 00:11:23.498 37176.785 - 37415.098: 99.7512% ( 5) 00:11:23.498 37415.098 - 37653.411: 99.8109% ( 6) 00:11:23.498 37653.411 - 37891.724: 99.8706% ( 6) 00:11:23.498 37891.724 - 38130.036: 99.9204% ( 5) 00:11:23.498 38130.036 - 38368.349: 99.9701% ( 5) 00:11:23.498 38368.349 - 38606.662: 100.0000% ( 3) 00:11:23.498 
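A note on reading these histograms: in each row, the percentage is the cumulative share of I/Os that completed at or below that latency bucket, while the parenthesized figure is the count of I/Os that landed in that bucket alone, so a given percentile is simply the first bucket whose running percentage crosses it. As a sketch, assuming one histogram were saved to perf-hist.log with one bucket per line and no Jenkins timestamp prefix (a hypothetical file, not something this job writes out), the p99 bucket could be pulled out with:

# Print the first bucket whose cumulative percentage reaches 99%;
# bucket lines look like "14060.451 - 14120.029: 99.0508% ( 2)".
awk '$2 == "-" && $4+0 >= 99 { print; exit }' perf-hist.log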
00:11:23.498 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:23.498 ============================================================================== 00:11:23.498 Range in us Cumulative IO count 00:11:23.498 9651.665 - 9711.244: 0.0995% ( 10) 00:11:23.498 9711.244 - 9770.822: 0.2588% ( 16) 00:11:23.498 9770.822 - 9830.400: 0.4976% ( 24) 00:11:23.498 9830.400 - 9889.978: 0.7663% ( 27) 00:11:23.498 9889.978 - 9949.556: 1.3236% ( 56) 00:11:23.498 9949.556 - 10009.135: 1.9904% ( 67) 00:11:23.498 10009.135 - 10068.713: 2.5776% ( 59) 00:11:23.498 10068.713 - 10128.291: 3.3937% ( 82) 00:11:23.498 10128.291 - 10187.869: 4.3193% ( 93) 00:11:23.498 10187.869 - 10247.447: 5.1553% ( 84) 00:11:23.498 10247.447 - 10307.025: 5.8420% ( 69) 00:11:23.498 10307.025 - 10366.604: 6.5884% ( 75) 00:11:23.499 10366.604 - 10426.182: 7.3248% ( 74) 00:11:23.499 10426.182 - 10485.760: 8.1409% ( 82) 00:11:23.499 10485.760 - 10545.338: 9.1561% ( 102) 00:11:23.499 10545.338 - 10604.916: 10.3901% ( 124) 00:11:23.499 10604.916 - 10664.495: 11.7436% ( 136) 00:11:23.499 10664.495 - 10724.073: 13.2663% ( 153) 00:11:23.499 10724.073 - 10783.651: 14.7293% ( 147) 00:11:23.499 10783.651 - 10843.229: 16.0629% ( 134) 00:11:23.499 10843.229 - 10902.807: 17.3965% ( 134) 00:11:23.499 10902.807 - 10962.385: 18.8097% ( 142) 00:11:23.499 10962.385 - 11021.964: 20.2528% ( 145) 00:11:23.499 11021.964 - 11081.542: 21.7456% ( 150) 00:11:23.499 11081.542 - 11141.120: 23.3081% ( 157) 00:11:23.499 11141.120 - 11200.698: 24.7014% ( 140) 00:11:23.499 11200.698 - 11260.276: 26.0947% ( 140) 00:11:23.499 11260.276 - 11319.855: 27.3985% ( 131) 00:11:23.499 11319.855 - 11379.433: 28.8615% ( 147) 00:11:23.499 11379.433 - 11439.011: 30.3941% ( 154) 00:11:23.499 11439.011 - 11498.589: 31.9566% ( 157) 00:11:23.499 11498.589 - 11558.167: 33.3698% ( 142) 00:11:23.499 11558.167 - 11617.745: 34.7830% ( 142) 00:11:23.499 11617.745 - 11677.324: 36.4152% ( 164) 00:11:23.499 11677.324 - 11736.902: 38.2862% ( 188) 00:11:23.499 11736.902 - 11796.480: 39.9781% ( 170) 00:11:23.499 11796.480 - 11856.058: 41.5207% ( 155) 00:11:23.499 11856.058 - 11915.636: 43.0832% ( 157) 00:11:23.499 11915.636 - 11975.215: 44.4964% ( 142) 00:11:23.499 11975.215 - 12034.793: 45.8499% ( 136) 00:11:23.499 12034.793 - 12094.371: 47.2333% ( 139) 00:11:23.499 12094.371 - 12153.949: 48.5669% ( 134) 00:11:23.499 12153.949 - 12213.527: 49.8308% ( 127) 00:11:23.499 12213.527 - 12273.105: 51.1047% ( 128) 00:11:23.499 12273.105 - 12332.684: 52.2094% ( 111) 00:11:23.499 12332.684 - 12392.262: 53.2942% ( 109) 00:11:23.499 12392.262 - 12451.840: 54.3491% ( 106) 00:11:23.499 12451.840 - 12511.418: 55.2647% ( 92) 00:11:23.499 12511.418 - 12570.996: 56.2400% ( 98) 00:11:23.499 12570.996 - 12630.575: 57.1357% ( 90) 00:11:23.499 12630.575 - 12690.153: 57.9518% ( 82) 00:11:23.499 12690.153 - 12749.731: 58.7381% ( 79) 00:11:23.499 12749.731 - 12809.309: 59.3750% ( 64) 00:11:23.499 12809.309 - 12868.887: 59.9522% ( 58) 00:11:23.499 12868.887 - 12928.465: 60.5295% ( 58) 00:11:23.499 12928.465 - 12988.044: 61.1166% ( 59) 00:11:23.499 12988.044 - 13047.622: 61.7934% ( 68) 00:11:23.499 13047.622 - 13107.200: 62.5697% ( 78) 00:11:23.499 13107.200 - 13166.778: 63.3559% ( 79) 00:11:23.499 13166.778 - 13226.356: 64.0725% ( 72) 00:11:23.499 13226.356 - 13285.935: 64.7691% ( 70) 00:11:23.499 13285.935 - 13345.513: 65.4857% ( 72) 00:11:23.499 13345.513 - 13405.091: 66.3217% ( 84) 00:11:23.499 13405.091 - 13464.669: 67.2373% ( 92) 00:11:23.499 13464.669 - 13524.247: 68.2623% ( 103) 00:11:23.499 
13524.247 - 13583.825: 69.4666% ( 121) 00:11:23.499 13583.825 - 13643.404: 70.6608% ( 120) 00:11:23.499 13643.404 - 13702.982: 72.1636% ( 151) 00:11:23.499 13702.982 - 13762.560: 73.4773% ( 132) 00:11:23.499 13762.560 - 13822.138: 74.5024% ( 103) 00:11:23.499 13822.138 - 13881.716: 75.5474% ( 105) 00:11:23.499 13881.716 - 13941.295: 76.5725% ( 103) 00:11:23.499 13941.295 - 14000.873: 77.7568% ( 119) 00:11:23.499 14000.873 - 14060.451: 79.1003% ( 135) 00:11:23.499 14060.451 - 14120.029: 80.3842% ( 129) 00:11:23.499 14120.029 - 14179.607: 81.7377% ( 136) 00:11:23.499 14179.607 - 14239.185: 83.0713% ( 134) 00:11:23.499 14239.185 - 14298.764: 84.2456% ( 118) 00:11:23.499 14298.764 - 14358.342: 85.4001% ( 116) 00:11:23.499 14358.342 - 14417.920: 86.4749% ( 108) 00:11:23.499 14417.920 - 14477.498: 87.5498% ( 108) 00:11:23.499 14477.498 - 14537.076: 88.5251% ( 98) 00:11:23.499 14537.076 - 14596.655: 89.5203% ( 100) 00:11:23.499 14596.655 - 14656.233: 90.4956% ( 98) 00:11:23.499 14656.233 - 14715.811: 91.3217% ( 83) 00:11:23.499 14715.811 - 14775.389: 92.0581% ( 74) 00:11:23.499 14775.389 - 14834.967: 92.8244% ( 77) 00:11:23.499 14834.967 - 14894.545: 93.5211% ( 70) 00:11:23.499 14894.545 - 14954.124: 94.1282% ( 61) 00:11:23.499 14954.124 - 15013.702: 94.6955% ( 57) 00:11:23.499 15013.702 - 15073.280: 95.2130% ( 52) 00:11:23.499 15073.280 - 15132.858: 95.5912% ( 38) 00:11:23.499 15132.858 - 15192.436: 95.9096% ( 32) 00:11:23.499 15192.436 - 15252.015: 96.1883% ( 28) 00:11:23.499 15252.015 - 15371.171: 96.6561% ( 47) 00:11:23.499 15371.171 - 15490.327: 97.0442% ( 39) 00:11:23.499 15490.327 - 15609.484: 97.2333% ( 19) 00:11:23.499 15609.484 - 15728.640: 97.3428% ( 11) 00:11:23.499 15728.640 - 15847.796: 97.4025% ( 6) 00:11:23.499 15847.796 - 15966.953: 97.4522% ( 5) 00:11:23.499 17277.673 - 17396.829: 97.5119% ( 6) 00:11:23.499 17396.829 - 17515.985: 97.5816% ( 7) 00:11:23.499 17515.985 - 17635.142: 97.6115% ( 3) 00:11:23.499 17635.142 - 17754.298: 97.6413% ( 3) 00:11:23.499 17754.298 - 17873.455: 97.7607% ( 12) 00:11:23.499 17873.455 - 17992.611: 97.8802% ( 12) 00:11:23.499 17992.611 - 18111.767: 97.9697% ( 9) 00:11:23.499 18111.767 - 18230.924: 98.0494% ( 8) 00:11:23.499 18230.924 - 18350.080: 98.1389% ( 9) 00:11:23.499 18350.080 - 18469.236: 98.2285% ( 9) 00:11:23.499 18469.236 - 18588.393: 98.3181% ( 9) 00:11:23.499 18588.393 - 18707.549: 98.4076% ( 9) 00:11:23.499 18707.549 - 18826.705: 98.4873% ( 8) 00:11:23.499 18826.705 - 18945.862: 98.5868% ( 10) 00:11:23.499 18945.862 - 19065.018: 98.6863% ( 10) 00:11:23.499 19065.018 - 19184.175: 98.7261% ( 4) 00:11:23.499 25022.836 - 25141.993: 98.7460% ( 2) 00:11:23.499 25141.993 - 25261.149: 98.7659% ( 2) 00:11:23.499 25261.149 - 25380.305: 98.7958% ( 3) 00:11:23.499 25380.305 - 25499.462: 98.8256% ( 3) 00:11:23.499 25499.462 - 25618.618: 98.8455% ( 2) 00:11:23.499 25618.618 - 25737.775: 98.8754% ( 3) 00:11:23.499 25737.775 - 25856.931: 98.9053% ( 3) 00:11:23.499 25856.931 - 25976.087: 98.9351% ( 3) 00:11:23.499 25976.087 - 26095.244: 98.9550% ( 2) 00:11:23.499 26095.244 - 26214.400: 98.9849% ( 3) 00:11:23.499 26214.400 - 26333.556: 99.0147% ( 3) 00:11:23.499 26333.556 - 26452.713: 99.0446% ( 3) 00:11:23.499 26452.713 - 26571.869: 99.0645% ( 2) 00:11:23.499 26571.869 - 26691.025: 99.0943% ( 3) 00:11:23.499 26691.025 - 26810.182: 99.1242% ( 3) 00:11:23.499 26810.182 - 26929.338: 99.1541% ( 3) 00:11:23.499 26929.338 - 27048.495: 99.1939% ( 4) 00:11:23.499 27048.495 - 27167.651: 99.2237% ( 3) 00:11:23.499 27167.651 - 27286.807: 99.2536% ( 3) 00:11:23.499 
27286.807 - 27405.964: 99.2834% ( 3) 00:11:23.499 27405.964 - 27525.120: 99.3133% ( 3) 00:11:23.499 27525.120 - 27644.276: 99.3531% ( 4) 00:11:23.499 27644.276 - 27763.433: 99.3631% ( 1) 00:11:23.499 33125.469 - 33363.782: 99.3730% ( 1) 00:11:23.499 33363.782 - 33602.095: 99.4228% ( 5) 00:11:23.499 33602.095 - 33840.407: 99.4924% ( 7) 00:11:23.499 33840.407 - 34078.720: 99.5521% ( 6) 00:11:23.499 34078.720 - 34317.033: 99.6119% ( 6) 00:11:23.499 34317.033 - 34555.345: 99.6616% ( 5) 00:11:23.500 34555.345 - 34793.658: 99.7213% ( 6) 00:11:23.500 34793.658 - 35031.971: 99.7811% ( 6) 00:11:23.500 35031.971 - 35270.284: 99.8408% ( 6) 00:11:23.500 35270.284 - 35508.596: 99.9005% ( 6) 00:11:23.500 35508.596 - 35746.909: 99.9602% ( 6) 00:11:23.500 35746.909 - 35985.222: 100.0000% ( 4) 00:11:23.500 00:11:23.500 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:23.500 ============================================================================== 00:11:23.500 Range in us Cumulative IO count 00:11:23.500 9651.665 - 9711.244: 0.0597% ( 6) 00:11:23.500 9711.244 - 9770.822: 0.1891% ( 13) 00:11:23.500 9770.822 - 9830.400: 0.3583% ( 17) 00:11:23.500 9830.400 - 9889.978: 0.7365% ( 38) 00:11:23.500 9889.978 - 9949.556: 1.2142% ( 48) 00:11:23.500 9949.556 - 10009.135: 1.7914% ( 58) 00:11:23.500 10009.135 - 10068.713: 2.2990% ( 51) 00:11:23.500 10068.713 - 10128.291: 3.1051% ( 81) 00:11:23.500 10128.291 - 10187.869: 4.0804% ( 98) 00:11:23.500 10187.869 - 10247.447: 5.0657% ( 99) 00:11:23.500 10247.447 - 10307.025: 6.0410% ( 98) 00:11:23.500 10307.025 - 10366.604: 6.8372% ( 80) 00:11:23.500 10366.604 - 10426.182: 7.6831% ( 85) 00:11:23.500 10426.182 - 10485.760: 8.5291% ( 85) 00:11:23.500 10485.760 - 10545.338: 9.4546% ( 93) 00:11:23.500 10545.338 - 10604.916: 10.5096% ( 106) 00:11:23.500 10604.916 - 10664.495: 11.7834% ( 128) 00:11:23.500 10664.495 - 10724.073: 13.1270% ( 135) 00:11:23.500 10724.073 - 10783.651: 14.4307% ( 131) 00:11:23.500 10783.651 - 10843.229: 15.8639% ( 144) 00:11:23.500 10843.229 - 10902.807: 17.3467% ( 149) 00:11:23.500 10902.807 - 10962.385: 18.7301% ( 139) 00:11:23.500 10962.385 - 11021.964: 20.0239% ( 130) 00:11:23.500 11021.964 - 11081.542: 21.3973% ( 138) 00:11:23.500 11081.542 - 11141.120: 22.7110% ( 132) 00:11:23.500 11141.120 - 11200.698: 24.1242% ( 142) 00:11:23.500 11200.698 - 11260.276: 25.6270% ( 151) 00:11:23.500 11260.276 - 11319.855: 27.0502% ( 143) 00:11:23.500 11319.855 - 11379.433: 28.6823% ( 164) 00:11:23.500 11379.433 - 11439.011: 30.3045% ( 163) 00:11:23.500 11439.011 - 11498.589: 31.8272% ( 153) 00:11:23.500 11498.589 - 11558.167: 33.4992% ( 168) 00:11:23.500 11558.167 - 11617.745: 35.1612% ( 167) 00:11:23.500 11617.745 - 11677.324: 37.0920% ( 194) 00:11:23.500 11677.324 - 11736.902: 38.8734% ( 179) 00:11:23.500 11736.902 - 11796.480: 40.5354% ( 167) 00:11:23.500 11796.480 - 11856.058: 42.1975% ( 167) 00:11:23.500 11856.058 - 11915.636: 43.6505% ( 146) 00:11:23.500 11915.636 - 11975.215: 45.0239% ( 138) 00:11:23.500 11975.215 - 12034.793: 46.1485% ( 113) 00:11:23.500 12034.793 - 12094.371: 47.3627% ( 122) 00:11:23.500 12094.371 - 12153.949: 48.5370% ( 118) 00:11:23.500 12153.949 - 12213.527: 49.7114% ( 118) 00:11:23.500 12213.527 - 12273.105: 50.8658% ( 116) 00:11:23.500 12273.105 - 12332.684: 52.1696% ( 131) 00:11:23.500 12332.684 - 12392.262: 53.3340% ( 117) 00:11:23.500 12392.262 - 12451.840: 54.3790% ( 105) 00:11:23.500 12451.840 - 12511.418: 55.3742% ( 100) 00:11:23.500 12511.418 - 12570.996: 56.2600% ( 89) 00:11:23.500 12570.996 - 12630.575: 
57.1158% ( 86) 00:11:23.500 12630.575 - 12690.153: 57.9120% ( 80) 00:11:23.500 12690.153 - 12749.731: 58.5888% ( 68) 00:11:23.500 12749.731 - 12809.309: 59.3053% ( 72) 00:11:23.500 12809.309 - 12868.887: 59.9622% ( 66) 00:11:23.500 12868.887 - 12928.465: 60.6091% ( 65) 00:11:23.500 12928.465 - 12988.044: 61.3157% ( 71) 00:11:23.500 12988.044 - 13047.622: 62.1119% ( 80) 00:11:23.500 13047.622 - 13107.200: 63.0076% ( 90) 00:11:23.500 13107.200 - 13166.778: 63.6644% ( 66) 00:11:23.500 13166.778 - 13226.356: 64.4108% ( 75) 00:11:23.500 13226.356 - 13285.935: 65.1473% ( 74) 00:11:23.500 13285.935 - 13345.513: 65.8439% ( 70) 00:11:23.500 13345.513 - 13405.091: 66.6202% ( 78) 00:11:23.500 13405.091 - 13464.669: 67.4861% ( 87) 00:11:23.500 13464.669 - 13524.247: 68.3917% ( 91) 00:11:23.500 13524.247 - 13583.825: 69.3272% ( 94) 00:11:23.500 13583.825 - 13643.404: 70.3523% ( 103) 00:11:23.500 13643.404 - 13702.982: 71.6262% ( 128) 00:11:23.500 13702.982 - 13762.560: 72.9598% ( 134) 00:11:23.500 13762.560 - 13822.138: 74.4228% ( 147) 00:11:23.500 13822.138 - 13881.716: 75.7066% ( 129) 00:11:23.500 13881.716 - 13941.295: 76.9805% ( 128) 00:11:23.500 13941.295 - 14000.873: 78.2046% ( 123) 00:11:23.500 14000.873 - 14060.451: 79.4785% ( 128) 00:11:23.500 14060.451 - 14120.029: 80.7325% ( 126) 00:11:23.500 14120.029 - 14179.607: 81.9367% ( 121) 00:11:23.500 14179.607 - 14239.185: 83.1509% ( 122) 00:11:23.500 14239.185 - 14298.764: 84.2357% ( 109) 00:11:23.500 14298.764 - 14358.342: 85.3603% ( 113) 00:11:23.500 14358.342 - 14417.920: 86.4152% ( 106) 00:11:23.500 14417.920 - 14477.498: 87.4602% ( 105) 00:11:23.500 14477.498 - 14537.076: 88.3758% ( 92) 00:11:23.500 14537.076 - 14596.655: 89.3312% ( 96) 00:11:23.500 14596.655 - 14656.233: 90.2269% ( 90) 00:11:23.500 14656.233 - 14715.811: 91.0629% ( 84) 00:11:23.500 14715.811 - 14775.389: 91.8292% ( 77) 00:11:23.500 14775.389 - 14834.967: 92.5756% ( 75) 00:11:23.500 14834.967 - 14894.545: 93.2822% ( 71) 00:11:23.500 14894.545 - 14954.124: 93.9590% ( 68) 00:11:23.500 14954.124 - 15013.702: 94.5959% ( 64) 00:11:23.500 15013.702 - 15073.280: 95.1334% ( 54) 00:11:23.500 15073.280 - 15132.858: 95.4518% ( 32) 00:11:23.500 15132.858 - 15192.436: 95.7504% ( 30) 00:11:23.500 15192.436 - 15252.015: 95.9893% ( 24) 00:11:23.500 15252.015 - 15371.171: 96.4271% ( 44) 00:11:23.500 15371.171 - 15490.327: 96.7655% ( 34) 00:11:23.500 15490.327 - 15609.484: 97.0243% ( 26) 00:11:23.500 15609.484 - 15728.640: 97.2333% ( 21) 00:11:23.500 15728.640 - 15847.796: 97.3129% ( 8) 00:11:23.500 15847.796 - 15966.953: 97.3826% ( 7) 00:11:23.500 15966.953 - 16086.109: 97.4423% ( 6) 00:11:23.500 16086.109 - 16205.265: 97.4522% ( 1) 00:11:23.500 17635.142 - 17754.298: 97.5119% ( 6) 00:11:23.500 17754.298 - 17873.455: 97.6015% ( 9) 00:11:23.500 17873.455 - 17992.611: 97.6612% ( 6) 00:11:23.500 17992.611 - 18111.767: 97.7807% ( 12) 00:11:23.500 18111.767 - 18230.924: 97.9299% ( 15) 00:11:23.500 18230.924 - 18350.080: 98.0295% ( 10) 00:11:23.500 18350.080 - 18469.236: 98.1290% ( 10) 00:11:23.500 18469.236 - 18588.393: 98.2186% ( 9) 00:11:23.500 18588.393 - 18707.549: 98.3380% ( 12) 00:11:23.500 18707.549 - 18826.705: 98.4475% ( 11) 00:11:23.500 18826.705 - 18945.862: 98.5569% ( 11) 00:11:23.500 18945.862 - 19065.018: 98.6365% ( 8) 00:11:23.500 19065.018 - 19184.175: 98.6764% ( 4) 00:11:23.501 19184.175 - 19303.331: 98.7261% ( 5) 00:11:23.501 23116.335 - 23235.491: 98.7560% ( 3) 00:11:23.501 23235.491 - 23354.647: 98.8654% ( 11) 00:11:23.501 23354.647 - 23473.804: 98.9849% ( 12) 00:11:23.501 
23473.804 - 23592.960: 99.0147% ( 3) 00:11:23.501 23592.960 - 23712.116: 99.0346% ( 2) 00:11:23.501 23712.116 - 23831.273: 99.0645% ( 3) 00:11:23.501 23831.273 - 23950.429: 99.0844% ( 2) 00:11:23.501 23950.429 - 24069.585: 99.1143% ( 3) 00:11:23.501 24069.585 - 24188.742: 99.1441% ( 3) 00:11:23.501 24188.742 - 24307.898: 99.1640% ( 2) 00:11:23.501 24307.898 - 24427.055: 99.1939% ( 3) 00:11:23.501 24427.055 - 24546.211: 99.2138% ( 2) 00:11:23.501 24546.211 - 24665.367: 99.2436% ( 3) 00:11:23.501 24665.367 - 24784.524: 99.2635% ( 2) 00:11:23.501 24784.524 - 24903.680: 99.2934% ( 3) 00:11:23.501 24903.680 - 25022.836: 99.3232% ( 3) 00:11:23.501 25022.836 - 25141.993: 99.3531% ( 3) 00:11:23.501 25141.993 - 25261.149: 99.3631% ( 1) 00:11:23.501 30742.342 - 30980.655: 99.3929% ( 3) 00:11:23.501 30980.655 - 31218.967: 99.4526% ( 6) 00:11:23.501 31218.967 - 31457.280: 99.5123% ( 6) 00:11:23.501 31457.280 - 31695.593: 99.5721% ( 6) 00:11:23.501 31695.593 - 31933.905: 99.6318% ( 6) 00:11:23.501 31933.905 - 32172.218: 99.7014% ( 7) 00:11:23.501 32172.218 - 32410.531: 99.7611% ( 6) 00:11:23.501 32410.531 - 32648.844: 99.8209% ( 6) 00:11:23.501 32648.844 - 32887.156: 99.8806% ( 6) 00:11:23.501 32887.156 - 33125.469: 99.9303% ( 5) 00:11:23.501 33125.469 - 33363.782: 99.9900% ( 6) 00:11:23.501 33363.782 - 33602.095: 100.0000% ( 1) 00:11:23.501 00:11:23.501 08:30:02 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:11:23.501 00:11:23.501 real 0m2.809s 00:11:23.501 user 0m2.328s 00:11:23.501 sys 0m0.357s 00:11:23.501 08:30:02 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.501 08:30:02 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:11:23.501 ************************************ 00:11:23.501 END TEST nvme_perf 00:11:23.501 ************************************ 00:11:23.501 08:30:02 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:23.501 08:30:02 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:23.501 08:30:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.501 08:30:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:23.501 ************************************ 00:11:23.501 START TEST nvme_hello_world 00:11:23.501 ************************************ 00:11:23.501 08:30:02 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:23.759 Initializing NVMe Controllers 00:11:23.759 Attached to 0000:00:10.0 00:11:23.759 Namespace ID: 1 size: 6GB 00:11:23.759 Attached to 0000:00:11.0 00:11:23.759 Namespace ID: 1 size: 5GB 00:11:23.759 Attached to 0000:00:13.0 00:11:23.759 Namespace ID: 1 size: 1GB 00:11:23.759 Attached to 0000:00:12.0 00:11:23.759 Namespace ID: 1 size: 4GB 00:11:23.759 Namespace ID: 2 size: 4GB 00:11:23.759 Namespace ID: 3 size: 4GB 00:11:23.759 Initialization complete. 00:11:23.759 INFO: using host memory buffer for IO 00:11:23.759 Hello world! 00:11:23.759 INFO: using host memory buffer for IO 00:11:23.759 Hello world! 00:11:23.759 INFO: using host memory buffer for IO 00:11:23.759 Hello world! 00:11:23.759 INFO: using host memory buffer for IO 00:11:23.759 Hello world! 00:11:23.759 INFO: using host memory buffer for IO 00:11:23.759 Hello world! 00:11:23.759 INFO: using host memory buffer for IO 00:11:23.759 Hello world! 
00:11:23.759 00:11:23.759 real 0m0.350s 00:11:23.759 user 0m0.142s 00:11:23.759 sys 0m0.163s 00:11:23.759 08:30:02 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.759 08:30:02 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:23.759 ************************************ 00:11:23.759 END TEST nvme_hello_world 00:11:23.759 ************************************ 00:11:23.759 08:30:02 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:23.759 08:30:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:23.759 08:30:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.759 08:30:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:23.759 ************************************ 00:11:23.759 START TEST nvme_sgl 00:11:23.759 ************************************ 00:11:23.759 08:30:02 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:24.019 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:11:24.019 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:11:24.019 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:11:24.019 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:11:24.019 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:11:24.019 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:11:24.019 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:11:24.019 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:11:24.277 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:11:24.277 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:11:24.277 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:11:24.277 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:11:24.277 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:11:24.277 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:11:24.277 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:11:24.277 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:11:24.277 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:11:24.277 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:11:24.277 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:11:24.277 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:11:24.278 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:11:24.278 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:11:24.278 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:11:24.278 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:11:24.278 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:11:24.278 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:11:24.278 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:11:24.278 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:11:24.278 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:11:24.278 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:11:24.278 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:11:24.278 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:11:24.278 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:11:24.278 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:11:24.278 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:11:24.278 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:11:24.278 NVMe Readv/Writev Request test 00:11:24.278 Attached to 0000:00:10.0 00:11:24.278 Attached to 0000:00:11.0 00:11:24.278 Attached to 0000:00:13.0 00:11:24.278 Attached to 0000:00:12.0 00:11:24.278 0000:00:10.0: build_io_request_2 test passed 00:11:24.278 0000:00:10.0: build_io_request_4 test passed 00:11:24.278 0000:00:10.0: build_io_request_5 test passed 00:11:24.278 0000:00:10.0: build_io_request_6 test passed 00:11:24.278 0000:00:10.0: build_io_request_7 test passed 00:11:24.278 0000:00:10.0: build_io_request_10 test passed 00:11:24.278 0000:00:11.0: build_io_request_2 test passed 00:11:24.278 0000:00:11.0: build_io_request_4 test passed 00:11:24.278 0000:00:11.0: build_io_request_5 test passed 00:11:24.278 0000:00:11.0: build_io_request_6 test passed 00:11:24.278 0000:00:11.0: build_io_request_7 test passed 00:11:24.278 0000:00:11.0: build_io_request_10 test passed 00:11:24.278 Cleaning up... 00:11:24.278 00:11:24.278 real 0m0.425s 00:11:24.278 user 0m0.224s 00:11:24.278 sys 0m0.158s 00:11:24.278 08:30:03 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.278 ************************************ 00:11:24.278 END TEST nvme_sgl 00:11:24.278 ************************************ 00:11:24.278 08:30:03 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:11:24.278 08:30:03 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:24.278 08:30:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:24.278 08:30:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.278 08:30:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:24.278 ************************************ 00:11:24.278 START TEST nvme_e2edp 00:11:24.278 ************************************ 00:11:24.278 08:30:03 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:24.537 NVMe Write/Read with End-to-End data protection test 00:11:24.537 Attached to 0000:00:10.0 00:11:24.537 Attached to 0000:00:11.0 00:11:24.537 Attached to 0000:00:13.0 00:11:24.537 Attached to 0000:00:12.0 00:11:24.537 Cleaning up... 
00:11:24.537 00:11:24.537 real 0m0.331s 00:11:24.537 user 0m0.129s 00:11:24.537 sys 0m0.158s 00:11:24.537 08:30:03 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.537 08:30:03 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:11:24.537 ************************************ 00:11:24.537 END TEST nvme_e2edp 00:11:24.537 ************************************ 00:11:24.537 08:30:03 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:24.537 08:30:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:24.537 08:30:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.537 08:30:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:24.796 ************************************ 00:11:24.796 START TEST nvme_reserve 00:11:24.796 ************************************ 00:11:24.796 08:30:03 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:25.054 ===================================================== 00:11:25.055 NVMe Controller at PCI bus 0, device 16, function 0 00:11:25.055 ===================================================== 00:11:25.055 Reservations: Not Supported 00:11:25.055 ===================================================== 00:11:25.055 NVMe Controller at PCI bus 0, device 17, function 0 00:11:25.055 ===================================================== 00:11:25.055 Reservations: Not Supported 00:11:25.055 ===================================================== 00:11:25.055 NVMe Controller at PCI bus 0, device 19, function 0 00:11:25.055 ===================================================== 00:11:25.055 Reservations: Not Supported 00:11:25.055 ===================================================== 00:11:25.055 NVMe Controller at PCI bus 0, device 18, function 0 00:11:25.055 ===================================================== 00:11:25.055 Reservations: Not Supported 00:11:25.055 Reservation test passed 00:11:25.055 00:11:25.055 real 0m0.339s 00:11:25.055 user 0m0.134s 00:11:25.055 sys 0m0.160s 00:11:25.055 08:30:04 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.055 08:30:04 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:11:25.055 ************************************ 00:11:25.055 END TEST nvme_reserve 00:11:25.055 ************************************ 00:11:25.055 08:30:04 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:25.055 08:30:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:25.055 08:30:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.055 08:30:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:25.055 ************************************ 00:11:25.055 START TEST nvme_err_injection 00:11:25.055 ************************************ 00:11:25.055 08:30:04 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:25.313 NVMe Error Injection test 00:11:25.313 Attached to 0000:00:10.0 00:11:25.313 Attached to 0000:00:11.0 00:11:25.313 Attached to 0000:00:13.0 00:11:25.313 Attached to 0000:00:12.0 00:11:25.313 0000:00:10.0: get features failed as expected 00:11:25.313 0000:00:11.0: get features failed as expected 00:11:25.313 0000:00:13.0: get features failed as expected 00:11:25.313 0000:00:12.0: get features failed as expected 00:11:25.313 
0000:00:10.0: get features successfully as expected 00:11:25.314 0000:00:11.0: get features successfully as expected 00:11:25.314 0000:00:13.0: get features successfully as expected 00:11:25.314 0000:00:12.0: get features successfully as expected 00:11:25.314 0000:00:10.0: read failed as expected 00:11:25.314 0000:00:11.0: read failed as expected 00:11:25.314 0000:00:13.0: read failed as expected 00:11:25.314 0000:00:12.0: read failed as expected 00:11:25.314 0000:00:10.0: read successfully as expected 00:11:25.314 0000:00:11.0: read successfully as expected 00:11:25.314 0000:00:13.0: read successfully as expected 00:11:25.314 0000:00:12.0: read successfully as expected 00:11:25.314 Cleaning up... 00:11:25.314 00:11:25.314 real 0m0.350s 00:11:25.314 user 0m0.130s 00:11:25.314 sys 0m0.165s 00:11:25.314 08:30:04 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.314 08:30:04 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:11:25.314 ************************************ 00:11:25.314 END TEST nvme_err_injection 00:11:25.314 ************************************ 00:11:25.572 08:30:04 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:25.572 08:30:04 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:11:25.572 08:30:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.572 08:30:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:25.572 ************************************ 00:11:25.572 START TEST nvme_overhead 00:11:25.572 ************************************ 00:11:25.573 08:30:04 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:26.950 Initializing NVMe Controllers 00:11:26.950 Attached to 0000:00:10.0 00:11:26.950 Attached to 0000:00:11.0 00:11:26.950 Attached to 0000:00:13.0 00:11:26.950 Attached to 0000:00:12.0 00:11:26.950 Initialization complete. Launching workers. 
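The two histograms below bucket per-I/O submit and complete latency. Each row is one bucket: the pair on the left is the bucket's range in microseconds, the percentage is the cumulative share of I/Os at or below that bucket (the "Cumulative Count" column), and the parenthesized number is the raw count of I/Os that landed inside the bucket. A tail percentile can be pulled out of a saved histogram with a one-off filter like the sketch below; it assumes the rows have been extracted to hist.txt, one bucket per line with the timestamp prefixes stripped (hist.txt and the 99.0 cutoff are illustrative, not part of the harness):

    # Print the first bucket whose cumulative percentage reaches 99%,
    # i.e. an upper bound on the p99 latency in microseconds.
    awk '$2 == "-" {
        cum = $4; sub(/%/, "", cum)        # "99.1041%" -> "99.1041"
        if (cum + 0 >= 99.0) {
            hi = $3; sub(/:/, "", hi)      # "25.949:"  -> "25.949"
            printf "p99 <= %s us\n", hi
            exit
        }
    }' hist.txt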
00:11:26.950 submit (in ns) avg, min, max = 15960.4, 12841.4, 91930.9 00:11:26.950 complete (in ns) avg, min, max = 10783.0, 9520.0, 66706.4 00:11:26.950 00:11:26.950 Submit histogram 00:11:26.950 ================ 00:11:26.950 Range in us Cumulative Count 00:11:26.950 12.800 - 12.858: 0.0100% ( 1) 00:11:26.950 13.091 - 13.149: 0.0199% ( 1) 00:11:26.950 14.429 - 14.487: 0.0498% ( 3) 00:11:26.950 14.487 - 14.545: 0.1095% ( 6) 00:11:26.950 14.545 - 14.604: 0.8561% ( 75) 00:11:26.950 14.604 - 14.662: 3.5437% ( 270) 00:11:26.950 14.662 - 14.720: 9.3868% ( 587) 00:11:26.950 14.720 - 14.778: 18.3456% ( 900) 00:11:26.950 14.778 - 14.836: 28.8274% ( 1053) 00:11:26.950 14.836 - 14.895: 38.6024% ( 982) 00:11:26.950 14.895 - 15.011: 50.9457% ( 1240) 00:11:26.950 15.011 - 15.127: 58.0131% ( 710) 00:11:26.950 15.127 - 15.244: 62.9007% ( 491) 00:11:26.950 15.244 - 15.360: 66.1159% ( 323) 00:11:26.950 15.360 - 15.476: 68.0271% ( 192) 00:11:26.950 15.476 - 15.593: 69.2614% ( 124) 00:11:26.950 15.593 - 15.709: 70.3564% ( 110) 00:11:26.950 15.709 - 15.825: 71.9789% ( 163) 00:11:26.950 15.825 - 15.942: 73.3128% ( 134) 00:11:26.950 15.942 - 16.058: 74.4177% ( 111) 00:11:26.950 16.058 - 16.175: 75.0946% ( 68) 00:11:26.950 16.175 - 16.291: 75.4629% ( 37) 00:11:26.950 16.291 - 16.407: 75.7515% ( 29) 00:11:26.950 16.407 - 16.524: 75.9606% ( 21) 00:11:26.950 16.524 - 16.640: 76.0502% ( 9) 00:11:26.950 16.640 - 16.756: 76.1099% ( 6) 00:11:26.950 16.756 - 16.873: 76.1298% ( 2) 00:11:26.950 16.873 - 16.989: 76.1796% ( 5) 00:11:26.950 16.989 - 17.105: 76.2891% ( 11) 00:11:26.950 17.105 - 17.222: 76.6474% ( 36) 00:11:26.950 17.222 - 17.338: 77.8021% ( 116) 00:11:26.950 17.338 - 17.455: 79.7730% ( 198) 00:11:26.950 17.455 - 17.571: 83.2769% ( 352) 00:11:26.950 17.571 - 17.687: 87.3183% ( 406) 00:11:26.950 17.687 - 17.804: 89.6377% ( 233) 00:11:26.950 17.804 - 17.920: 90.9616% ( 133) 00:11:26.950 17.920 - 18.036: 91.9072% ( 95) 00:11:26.950 18.036 - 18.153: 92.6837% ( 78) 00:11:26.950 18.153 - 18.269: 92.9823% ( 30) 00:11:26.950 18.269 - 18.385: 93.3506% ( 37) 00:11:26.950 18.385 - 18.502: 93.7587% ( 41) 00:11:26.950 18.502 - 18.618: 94.1668% ( 41) 00:11:26.950 18.618 - 18.735: 94.5849% ( 42) 00:11:26.950 18.735 - 18.851: 94.8835% ( 30) 00:11:26.950 18.851 - 18.967: 95.1125% ( 23) 00:11:26.950 18.967 - 19.084: 95.2220% ( 11) 00:11:26.950 19.084 - 19.200: 95.3315% ( 11) 00:11:26.950 19.200 - 19.316: 95.3912% ( 6) 00:11:26.950 19.316 - 19.433: 95.4609% ( 7) 00:11:26.950 19.433 - 19.549: 95.5405% ( 8) 00:11:26.950 19.549 - 19.665: 95.5903% ( 5) 00:11:26.950 19.665 - 19.782: 95.6500% ( 6) 00:11:26.950 19.782 - 19.898: 95.6998% ( 5) 00:11:26.950 19.898 - 20.015: 95.7496% ( 5) 00:11:26.950 20.015 - 20.131: 95.8093% ( 6) 00:11:26.950 20.131 - 20.247: 95.8192% ( 1) 00:11:26.950 20.247 - 20.364: 95.8292% ( 1) 00:11:26.950 20.364 - 20.480: 95.8590% ( 3) 00:11:26.950 20.480 - 20.596: 95.8790% ( 2) 00:11:26.950 20.596 - 20.713: 95.8989% ( 2) 00:11:26.950 20.713 - 20.829: 95.9188% ( 2) 00:11:26.950 20.829 - 20.945: 95.9885% ( 7) 00:11:26.950 20.945 - 21.062: 96.0780% ( 9) 00:11:26.950 21.062 - 21.178: 96.1676% ( 9) 00:11:26.950 21.178 - 21.295: 96.2771% ( 11) 00:11:26.950 21.295 - 21.411: 96.3966% ( 12) 00:11:26.950 21.411 - 21.527: 96.5459% ( 15) 00:11:26.950 21.527 - 21.644: 96.6753% ( 13) 00:11:26.950 21.644 - 21.760: 96.8047% ( 13) 00:11:26.950 21.760 - 21.876: 96.8744% ( 7) 00:11:26.950 21.876 - 21.993: 96.9739% ( 10) 00:11:26.950 21.993 - 22.109: 97.0735% ( 10) 00:11:26.950 22.109 - 22.225: 97.1431% ( 7) 00:11:26.950 22.225 - 
22.342: 97.2029% ( 6) 00:11:26.950 22.342 - 22.458: 97.2327% ( 3) 00:11:26.950 22.458 - 22.575: 97.3223% ( 9) 00:11:26.950 22.575 - 22.691: 97.3721% ( 5) 00:11:26.950 22.691 - 22.807: 97.4219% ( 5) 00:11:26.950 22.807 - 22.924: 97.4716% ( 5) 00:11:26.950 22.924 - 23.040: 97.5114% ( 4) 00:11:26.950 23.040 - 23.156: 97.5811% ( 7) 00:11:26.950 23.156 - 23.273: 97.6110% ( 3) 00:11:26.950 23.273 - 23.389: 97.7205% ( 11) 00:11:26.950 23.389 - 23.505: 97.8001% ( 8) 00:11:26.950 23.505 - 23.622: 97.8300% ( 3) 00:11:26.950 23.622 - 23.738: 97.8997% ( 7) 00:11:26.950 23.738 - 23.855: 97.9196% ( 2) 00:11:26.950 23.855 - 23.971: 97.9693% ( 5) 00:11:26.950 23.971 - 24.087: 98.0291% ( 6) 00:11:26.950 24.087 - 24.204: 98.0888% ( 6) 00:11:26.950 24.204 - 24.320: 98.1087% ( 2) 00:11:26.950 24.320 - 24.436: 98.2082% ( 10) 00:11:26.950 24.436 - 24.553: 98.2879% ( 8) 00:11:26.950 24.553 - 24.669: 98.3974% ( 11) 00:11:26.950 24.669 - 24.785: 98.4770% ( 8) 00:11:26.950 24.785 - 24.902: 98.5865% ( 11) 00:11:26.950 24.902 - 25.018: 98.6562% ( 7) 00:11:26.950 25.018 - 25.135: 98.7159% ( 6) 00:11:26.950 25.135 - 25.251: 98.7955% ( 8) 00:11:26.950 25.251 - 25.367: 98.9548% ( 16) 00:11:26.950 25.367 - 25.484: 98.9648% ( 1) 00:11:26.950 25.484 - 25.600: 98.9946% ( 3) 00:11:26.950 25.600 - 25.716: 99.0444% ( 5) 00:11:26.950 25.716 - 25.833: 99.0942% ( 5) 00:11:26.950 25.833 - 25.949: 99.1041% ( 1) 00:11:26.950 25.949 - 26.065: 99.1738% ( 7) 00:11:26.950 26.065 - 26.182: 99.2236% ( 5) 00:11:26.950 26.182 - 26.298: 99.2435% ( 2) 00:11:26.950 26.298 - 26.415: 99.2733% ( 3) 00:11:26.950 26.415 - 26.531: 99.2833% ( 1) 00:11:26.950 26.531 - 26.647: 99.2933% ( 1) 00:11:26.950 26.647 - 26.764: 99.3132% ( 2) 00:11:26.950 26.764 - 26.880: 99.3231% ( 1) 00:11:26.950 26.880 - 26.996: 99.3430% ( 2) 00:11:26.950 26.996 - 27.113: 99.3530% ( 1) 00:11:26.950 27.113 - 27.229: 99.4127% ( 6) 00:11:26.950 27.229 - 27.345: 99.4426% ( 3) 00:11:26.950 27.462 - 27.578: 99.4525% ( 1) 00:11:26.950 27.578 - 27.695: 99.4923% ( 4) 00:11:26.950 27.695 - 27.811: 99.5222% ( 3) 00:11:26.950 27.811 - 27.927: 99.5620% ( 4) 00:11:26.950 27.927 - 28.044: 99.6018% ( 4) 00:11:26.950 28.044 - 28.160: 99.6118% ( 1) 00:11:26.950 28.160 - 28.276: 99.6317% ( 2) 00:11:26.950 28.276 - 28.393: 99.6416% ( 1) 00:11:26.950 28.393 - 28.509: 99.6616% ( 2) 00:11:26.950 28.509 - 28.625: 99.6914% ( 3) 00:11:26.950 28.625 - 28.742: 99.7014% ( 1) 00:11:26.950 28.742 - 28.858: 99.7113% ( 1) 00:11:26.950 28.858 - 28.975: 99.7312% ( 2) 00:11:26.950 28.975 - 29.091: 99.7611% ( 3) 00:11:26.950 29.091 - 29.207: 99.7711% ( 1) 00:11:26.950 29.207 - 29.324: 99.7810% ( 1) 00:11:26.950 29.556 - 29.673: 99.7910% ( 1) 00:11:26.950 29.789 - 30.022: 99.8009% ( 1) 00:11:26.950 30.255 - 30.487: 99.8109% ( 1) 00:11:26.950 30.487 - 30.720: 99.8208% ( 1) 00:11:26.950 30.720 - 30.953: 99.8308% ( 1) 00:11:26.950 31.185 - 31.418: 99.8407% ( 1) 00:11:26.950 31.651 - 31.884: 99.8507% ( 1) 00:11:26.950 33.978 - 34.211: 99.8606% ( 1) 00:11:26.950 34.211 - 34.444: 99.8706% ( 1) 00:11:26.950 36.073 - 36.305: 99.8805% ( 1) 00:11:26.950 37.236 - 37.469: 99.9005% ( 2) 00:11:26.950 37.469 - 37.702: 99.9104% ( 1) 00:11:26.950 37.702 - 37.935: 99.9204% ( 1) 00:11:26.950 37.935 - 38.167: 99.9303% ( 1) 00:11:26.950 38.167 - 38.400: 99.9403% ( 1) 00:11:26.950 40.029 - 40.262: 99.9502% ( 1) 00:11:26.950 41.193 - 41.425: 99.9602% ( 1) 00:11:26.950 42.124 - 42.356: 99.9701% ( 1) 00:11:26.950 48.407 - 48.640: 99.9801% ( 1) 00:11:26.950 67.491 - 67.956: 99.9900% ( 1) 00:11:26.950 91.695 - 92.160: 100.0000% ( 1) 
00:11:26.950 00:11:26.950 Complete histogram 00:11:26.950 ================== 00:11:26.950 Range in us Cumulative Count 00:11:26.950 9.484 - 9.542: 0.0100% ( 1) 00:11:26.950 9.600 - 9.658: 0.5276% ( 52) 00:11:26.950 9.658 - 9.716: 4.7382% ( 423) 00:11:26.950 9.716 - 9.775: 15.7476% ( 1106) 00:11:26.950 9.775 - 9.833: 30.1812% ( 1450) 00:11:26.950 9.833 - 9.891: 43.4402% ( 1332) 00:11:26.951 9.891 - 9.949: 53.0659% ( 967) 00:11:26.951 9.949 - 10.007: 58.7099% ( 567) 00:11:26.951 10.007 - 10.065: 61.7758% ( 308) 00:11:26.951 10.065 - 10.124: 63.3785% ( 161) 00:11:26.951 10.124 - 10.182: 64.2544% ( 88) 00:11:26.951 10.182 - 10.240: 64.7322% ( 48) 00:11:26.951 10.240 - 10.298: 65.1204% ( 39) 00:11:26.951 10.298 - 10.356: 65.3096% ( 19) 00:11:26.951 10.356 - 10.415: 65.6381% ( 33) 00:11:26.951 10.415 - 10.473: 66.0960% ( 46) 00:11:26.951 10.473 - 10.531: 66.5240% ( 43) 00:11:26.951 10.531 - 10.589: 67.0814% ( 56) 00:11:26.951 10.589 - 10.647: 67.8479% ( 77) 00:11:26.951 10.647 - 10.705: 68.6243% ( 78) 00:11:26.951 10.705 - 10.764: 69.4605% ( 84) 00:11:26.951 10.764 - 10.822: 70.2867% ( 83) 00:11:26.951 10.822 - 10.880: 70.8740% ( 59) 00:11:26.951 10.880 - 10.938: 71.2622% ( 39) 00:11:26.951 10.938 - 10.996: 71.5011% ( 24) 00:11:26.951 10.996 - 11.055: 71.6803% ( 18) 00:11:26.951 11.055 - 11.113: 71.7599% ( 8) 00:11:26.951 11.113 - 11.171: 71.7898% ( 3) 00:11:26.951 11.171 - 11.229: 71.8694% ( 8) 00:11:26.951 11.229 - 11.287: 71.9689% ( 10) 00:11:26.951 11.287 - 11.345: 72.0187% ( 5) 00:11:26.951 11.345 - 11.404: 72.0784% ( 6) 00:11:26.951 11.404 - 11.462: 72.1979% ( 12) 00:11:26.951 11.462 - 11.520: 72.2576% ( 6) 00:11:26.951 11.520 - 11.578: 72.3671% ( 11) 00:11:26.951 11.578 - 11.636: 72.4268% ( 6) 00:11:26.951 11.636 - 11.695: 72.4866% ( 6) 00:11:26.951 11.695 - 11.753: 73.0340% ( 55) 00:11:26.951 11.753 - 11.811: 74.7362% ( 171) 00:11:26.951 11.811 - 11.869: 78.0012% ( 328) 00:11:26.951 11.869 - 11.927: 82.2317% ( 425) 00:11:26.951 11.927 - 11.985: 85.2578% ( 304) 00:11:26.951 11.985 - 12.044: 87.1591% ( 191) 00:11:26.951 12.044 - 12.102: 88.0649% ( 91) 00:11:26.951 12.102 - 12.160: 88.7119% ( 65) 00:11:26.951 12.160 - 12.218: 89.0603% ( 35) 00:11:26.951 12.218 - 12.276: 89.4187% ( 36) 00:11:26.951 12.276 - 12.335: 89.7969% ( 38) 00:11:26.951 12.335 - 12.393: 90.2747% ( 48) 00:11:26.951 12.393 - 12.451: 90.9317% ( 66) 00:11:26.951 12.451 - 12.509: 91.4692% ( 54) 00:11:26.951 12.509 - 12.567: 91.9470% ( 48) 00:11:26.951 12.567 - 12.625: 92.3751% ( 43) 00:11:26.951 12.625 - 12.684: 92.8529% ( 48) 00:11:26.951 12.684 - 12.742: 93.3506% ( 50) 00:11:26.951 12.742 - 12.800: 93.8085% ( 46) 00:11:26.951 12.800 - 12.858: 94.4057% ( 60) 00:11:26.951 12.858 - 12.916: 94.7143% ( 31) 00:11:26.951 12.916 - 12.975: 95.0229% ( 31) 00:11:26.951 12.975 - 13.033: 95.3016% ( 28) 00:11:26.951 13.033 - 13.091: 95.5107% ( 21) 00:11:26.951 13.091 - 13.149: 95.6500% ( 14) 00:11:26.951 13.149 - 13.207: 95.7197% ( 7) 00:11:26.951 13.207 - 13.265: 95.7894% ( 7) 00:11:26.951 13.265 - 13.324: 95.8590% ( 7) 00:11:26.951 13.324 - 13.382: 95.8889% ( 3) 00:11:26.951 13.382 - 13.440: 95.8989% ( 1) 00:11:26.951 13.440 - 13.498: 95.9885% ( 9) 00:11:26.951 13.498 - 13.556: 96.0482% ( 6) 00:11:26.951 13.556 - 13.615: 96.0780% ( 3) 00:11:26.951 13.615 - 13.673: 96.0979% ( 2) 00:11:26.951 13.673 - 13.731: 96.1378% ( 4) 00:11:26.951 13.789 - 13.847: 96.1776% ( 4) 00:11:26.951 13.847 - 13.905: 96.1875% ( 1) 00:11:26.951 13.905 - 13.964: 96.1975% ( 1) 00:11:26.951 13.964 - 14.022: 96.2074% ( 1) 00:11:26.951 14.022 - 14.080: 96.2572% ( 
5) 00:11:26.951 14.080 - 14.138: 96.2672% ( 1) 00:11:26.951 14.138 - 14.196: 96.3070% ( 4) 00:11:26.951 14.196 - 14.255: 96.3169% ( 1) 00:11:26.951 14.255 - 14.313: 96.3369% ( 2) 00:11:26.951 14.371 - 14.429: 96.3866% ( 5) 00:11:26.951 14.429 - 14.487: 96.4165% ( 3) 00:11:26.951 14.487 - 14.545: 96.4364% ( 2) 00:11:26.951 14.545 - 14.604: 96.4862% ( 5) 00:11:26.951 14.604 - 14.662: 96.6056% ( 12) 00:11:26.951 14.662 - 14.720: 96.6952% ( 9) 00:11:26.951 14.720 - 14.778: 96.7649% ( 7) 00:11:26.951 14.778 - 14.836: 96.9142% ( 15) 00:11:26.951 14.836 - 14.895: 96.9938% ( 8) 00:11:26.951 14.895 - 15.011: 97.1929% ( 20) 00:11:26.951 15.011 - 15.127: 97.3024% ( 11) 00:11:26.951 15.127 - 15.244: 97.3721% ( 7) 00:11:26.951 15.244 - 15.360: 97.4020% ( 3) 00:11:26.951 15.360 - 15.476: 97.4119% ( 1) 00:11:26.951 15.476 - 15.593: 97.4517% ( 4) 00:11:26.951 15.593 - 15.709: 97.4716% ( 2) 00:11:26.951 15.709 - 15.825: 97.5214% ( 5) 00:11:26.951 15.825 - 15.942: 97.5811% ( 6) 00:11:26.951 15.942 - 16.058: 97.6010% ( 2) 00:11:26.951 16.058 - 16.175: 97.6508% ( 5) 00:11:26.951 16.175 - 16.291: 97.6807% ( 3) 00:11:26.951 16.291 - 16.407: 97.7503% ( 7) 00:11:26.951 16.407 - 16.524: 97.7902% ( 4) 00:11:26.951 16.524 - 16.640: 97.8101% ( 2) 00:11:26.951 16.640 - 16.756: 97.8300% ( 2) 00:11:26.951 16.756 - 16.873: 97.8399% ( 1) 00:11:26.951 16.873 - 16.989: 97.8698% ( 3) 00:11:26.951 16.989 - 17.105: 97.9295% ( 6) 00:11:26.951 17.105 - 17.222: 97.9395% ( 1) 00:11:26.951 17.222 - 17.338: 97.9992% ( 6) 00:11:26.951 17.338 - 17.455: 98.0589% ( 6) 00:11:26.951 17.455 - 17.571: 98.1087% ( 5) 00:11:26.951 17.571 - 17.687: 98.1684% ( 6) 00:11:26.951 17.687 - 17.804: 98.2282% ( 6) 00:11:26.951 17.804 - 17.920: 98.2879% ( 6) 00:11:26.951 17.920 - 18.036: 98.3576% ( 7) 00:11:26.951 18.036 - 18.153: 98.4173% ( 6) 00:11:26.951 18.153 - 18.269: 98.4671% ( 5) 00:11:26.951 18.269 - 18.385: 98.4870% ( 2) 00:11:26.951 18.385 - 18.502: 98.5268% ( 4) 00:11:26.951 18.502 - 18.618: 98.5467% ( 2) 00:11:26.951 18.618 - 18.735: 98.5666% ( 2) 00:11:26.951 18.735 - 18.851: 98.6661% ( 10) 00:11:26.951 18.851 - 18.967: 98.7060% ( 4) 00:11:26.951 18.967 - 19.084: 98.7657% ( 6) 00:11:26.951 19.084 - 19.200: 98.8055% ( 4) 00:11:26.951 19.200 - 19.316: 98.8652% ( 6) 00:11:26.951 19.316 - 19.433: 98.8951% ( 3) 00:11:26.951 19.433 - 19.549: 98.9648% ( 7) 00:11:26.951 19.549 - 19.665: 99.0046% ( 4) 00:11:26.951 19.665 - 19.782: 99.0245% ( 2) 00:11:26.951 19.782 - 19.898: 99.0344% ( 1) 00:11:26.951 19.898 - 20.015: 99.0942% ( 6) 00:11:26.951 20.015 - 20.131: 99.1638% ( 7) 00:11:26.951 20.131 - 20.247: 99.2136% ( 5) 00:11:26.951 20.247 - 20.364: 99.2335% ( 2) 00:11:26.951 20.364 - 20.480: 99.2634% ( 3) 00:11:26.951 20.480 - 20.596: 99.2733% ( 1) 00:11:26.951 20.829 - 20.945: 99.2833% ( 1) 00:11:26.951 20.945 - 21.062: 99.3231% ( 4) 00:11:26.951 21.062 - 21.178: 99.3530% ( 3) 00:11:26.951 21.178 - 21.295: 99.4027% ( 5) 00:11:26.951 21.295 - 21.411: 99.4326% ( 3) 00:11:26.951 21.411 - 21.527: 99.5122% ( 8) 00:11:26.951 21.644 - 21.760: 99.5322% ( 2) 00:11:26.951 21.760 - 21.876: 99.5521% ( 2) 00:11:26.951 21.876 - 21.993: 99.5720% ( 2) 00:11:26.951 21.993 - 22.109: 99.5919% ( 2) 00:11:26.951 22.109 - 22.225: 99.6118% ( 2) 00:11:26.951 22.225 - 22.342: 99.6217% ( 1) 00:11:26.951 22.458 - 22.575: 99.6317% ( 1) 00:11:26.951 22.575 - 22.691: 99.6416% ( 1) 00:11:26.951 22.691 - 22.807: 99.6715% ( 3) 00:11:26.951 22.807 - 22.924: 99.6914% ( 2) 00:11:26.951 22.924 - 23.040: 99.7213% ( 3) 00:11:26.951 23.040 - 23.156: 99.7312% ( 1) 00:11:26.951 23.273 - 
23.389: 99.7511% ( 2) 00:11:26.951 23.389 - 23.505: 99.7810% ( 3) 00:11:26.951 23.505 - 23.622: 99.8009% ( 2) 00:11:26.951 24.087 - 24.204: 99.8109% ( 1) 00:11:26.951 24.902 - 25.018: 99.8208% ( 1) 00:11:26.951 25.135 - 25.251: 99.8308% ( 1) 00:11:26.951 25.600 - 25.716: 99.8407% ( 1) 00:11:26.951 25.833 - 25.949: 99.8507% ( 1) 00:11:26.951 26.065 - 26.182: 99.8606% ( 1) 00:11:26.951 26.182 - 26.298: 99.8706% ( 1) 00:11:26.952 26.298 - 26.415: 99.8805% ( 1) 00:11:26.952 26.764 - 26.880: 99.8905% ( 1) 00:11:26.952 28.044 - 28.160: 99.9104% ( 2) 00:11:26.952 28.625 - 28.742: 99.9204% ( 1) 00:11:26.952 28.742 - 28.858: 99.9303% ( 1) 00:11:26.952 29.091 - 29.207: 99.9403% ( 1) 00:11:26.952 29.324 - 29.440: 99.9502% ( 1) 00:11:26.952 29.556 - 29.673: 99.9602% ( 1) 00:11:26.952 30.022 - 30.255: 99.9701% ( 1) 00:11:26.952 31.418 - 31.651: 99.9801% ( 1) 00:11:26.952 35.840 - 36.073: 99.9900% ( 1) 00:11:26.952 66.560 - 67.025: 100.0000% ( 1) 00:11:26.952 00:11:26.952 00:11:26.952 real 0m1.332s 00:11:26.952 user 0m1.128s 00:11:26.952 sys 0m0.156s 00:11:26.952 08:30:05 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.952 08:30:05 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:11:26.952 ************************************ 00:11:26.952 END TEST nvme_overhead 00:11:26.952 ************************************ 00:11:26.952 08:30:05 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:26.952 08:30:05 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:26.952 08:30:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.952 08:30:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:26.952 ************************************ 00:11:26.952 START TEST nvme_arbitration 00:11:26.952 ************************************ 00:11:26.952 08:30:05 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:30.237 Initializing NVMe Controllers 00:11:30.237 Attached to 0000:00:10.0 00:11:30.237 Attached to 0000:00:11.0 00:11:30.237 Attached to 0000:00:13.0 00:11:30.237 Attached to 0000:00:12.0 00:11:30.237 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:11:30.237 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:11:30.237 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:11:30.237 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:11:30.237 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:11:30.237 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:11:30.237 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:11:30.237 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:30.237 Initialization complete. Launching workers. 
00:11:30.237 Starting thread on core 1 with urgent priority queue 00:11:30.237 Starting thread on core 2 with urgent priority queue 00:11:30.238 Starting thread on core 3 with urgent priority queue 00:11:30.238 Starting thread on core 0 with urgent priority queue 00:11:30.238 QEMU NVMe Ctrl (12340 ) core 0: 768.00 IO/s 130.21 secs/100000 ios 00:11:30.238 QEMU NVMe Ctrl (12342 ) core 0: 768.00 IO/s 130.21 secs/100000 ios 00:11:30.238 QEMU NVMe Ctrl (12341 ) core 1: 789.33 IO/s 126.69 secs/100000 ios 00:11:30.238 QEMU NVMe Ctrl (12342 ) core 1: 789.33 IO/s 126.69 secs/100000 ios 00:11:30.238 QEMU NVMe Ctrl (12343 ) core 2: 597.33 IO/s 167.41 secs/100000 ios 00:11:30.238 QEMU NVMe Ctrl (12342 ) core 3: 554.67 IO/s 180.29 secs/100000 ios 00:11:30.238 ======================================================== 00:11:30.238 00:11:30.238 00:11:30.238 real 0m3.429s 00:11:30.238 user 0m9.363s 00:11:30.238 sys 0m0.163s 00:11:30.238 08:30:09 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.238 ************************************ 00:11:30.238 END TEST nvme_arbitration 00:11:30.238 ************************************ 00:11:30.238 08:30:09 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:11:30.238 08:30:09 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:30.238 08:30:09 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:30.238 08:30:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.238 08:30:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:30.238 ************************************ 00:11:30.238 START TEST nvme_single_aen 00:11:30.238 ************************************ 00:11:30.238 08:30:09 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:11:30.496 Asynchronous Event Request test 00:11:30.496 Attached to 0000:00:10.0 00:11:30.496 Attached to 0000:00:11.0 00:11:30.496 Attached to 0000:00:13.0 00:11:30.496 Attached to 0000:00:12.0 00:11:30.496 Reset controller to setup AER completions for this process 00:11:30.496 Registering asynchronous event callbacks... 
00:11:30.496 Getting orig temperature thresholds of all controllers 00:11:30.496 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:30.496 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:30.496 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:30.496 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:30.496 Setting all controllers temperature threshold low to trigger AER 00:11:30.496 Waiting for all controllers temperature threshold to be set lower 00:11:30.496 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:30.496 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:30.496 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:30.496 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:30.496 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:30.496 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:30.496 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:30.496 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:30.496 Waiting for all controllers to trigger AER and reset threshold 00:11:30.496 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:30.496 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:30.496 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:30.496 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:30.496 Cleaning up... 00:11:30.756 00:11:30.756 real 0m0.318s 00:11:30.756 user 0m0.123s 00:11:30.756 sys 0m0.154s 00:11:30.756 08:30:09 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.756 08:30:09 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:11:30.756 ************************************ 00:11:30.756 END TEST nvme_single_aen 00:11:30.756 ************************************ 00:11:30.756 08:30:09 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:30.756 08:30:09 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:30.756 08:30:09 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.756 08:30:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:30.756 ************************************ 00:11:30.756 START TEST nvme_doorbell_aers 00:11:30.756 ************************************ 00:11:30.756 08:30:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:11:30.756 08:30:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:11:30.756 08:30:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:30.756 08:30:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:30.756 08:30:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:30.756 08:30:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:30.756 08:30:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:11:30.756 08:30:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:30.756 08:30:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:30.756 08:30:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
00:11:30.756 08:30:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:30.756 08:30:09 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:30.756 08:30:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:30.756 08:30:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:31.015 [2024-11-19 08:30:10.237895] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64909) is not found. Dropping the request. 00:11:41.006 Executing: test_write_invalid_db 00:11:41.006 Waiting for AER completion... 00:11:41.006 Failure: test_write_invalid_db 00:11:41.006 00:11:41.006 Executing: test_invalid_db_write_overflow_sq 00:11:41.006 Waiting for AER completion... 00:11:41.006 Failure: test_invalid_db_write_overflow_sq 00:11:41.006 00:11:41.006 Executing: test_invalid_db_write_overflow_cq 00:11:41.006 Waiting for AER completion... 00:11:41.006 Failure: test_invalid_db_write_overflow_cq 00:11:41.006 00:11:41.006 08:30:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:41.006 08:30:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:41.006 [2024-11-19 08:30:20.292039] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64909) is not found. Dropping the request. 00:11:50.976 Executing: test_write_invalid_db 00:11:50.976 Waiting for AER completion... 00:11:50.976 Failure: test_write_invalid_db 00:11:50.976 00:11:50.976 Executing: test_invalid_db_write_overflow_sq 00:11:50.976 Waiting for AER completion... 00:11:50.976 Failure: test_invalid_db_write_overflow_sq 00:11:50.976 00:11:50.976 Executing: test_invalid_db_write_overflow_cq 00:11:50.976 Waiting for AER completion... 00:11:50.976 Failure: test_invalid_db_write_overflow_cq 00:11:50.976 00:11:50.976 08:30:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:50.976 08:30:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:51.235 [2024-11-19 08:30:30.320271] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64909) is not found. Dropping the request. 00:12:01.205 Executing: test_write_invalid_db 00:12:01.205 Waiting for AER completion... 00:12:01.205 Failure: test_write_invalid_db 00:12:01.205 00:12:01.205 Executing: test_invalid_db_write_overflow_sq 00:12:01.205 Waiting for AER completion... 00:12:01.205 Failure: test_invalid_db_write_overflow_sq 00:12:01.205 00:12:01.205 Executing: test_invalid_db_write_overflow_cq 00:12:01.205 Waiting for AER completion... 
00:12:01.205 Failure: test_invalid_db_write_overflow_cq 00:12:01.205 00:12:01.205 08:30:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:01.205 08:30:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:01.205 [2024-11-19 08:30:40.358571] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64909) is not found. Dropping the request. 00:12:11.215 Executing: test_write_invalid_db 00:12:11.215 Waiting for AER completion... 00:12:11.215 Failure: test_write_invalid_db 00:12:11.215 00:12:11.215 Executing: test_invalid_db_write_overflow_sq 00:12:11.215 Waiting for AER completion... 00:12:11.215 Failure: test_invalid_db_write_overflow_sq 00:12:11.215 00:12:11.215 Executing: test_invalid_db_write_overflow_cq 00:12:11.215 Waiting for AER completion... 00:12:11.215 Failure: test_invalid_db_write_overflow_cq 00:12:11.215 00:12:11.215 00:12:11.215 real 0m40.264s 00:12:11.215 user 0m34.126s 00:12:11.215 sys 0m5.733s 00:12:11.215 08:30:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.215 08:30:50 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:12:11.215 ************************************ 00:12:11.215 END TEST nvme_doorbell_aers 00:12:11.215 ************************************ 00:12:11.215 08:30:50 nvme -- nvme/nvme.sh@97 -- # uname 00:12:11.215 08:30:50 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:11.215 08:30:50 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:11.215 08:30:50 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:11.215 08:30:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.215 08:30:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:11.215 ************************************ 00:12:11.215 START TEST nvme_multi_aen 00:12:11.215 ************************************ 00:12:11.215 08:30:50 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:11.216 [2024-11-19 08:30:50.463693] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64909) is not found. Dropping the request. 00:12:11.216 [2024-11-19 08:30:50.463822] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64909) is not found. Dropping the request. 00:12:11.216 [2024-11-19 08:30:50.463857] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64909) is not found. Dropping the request. 00:12:11.216 [2024-11-19 08:30:50.465946] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64909) is not found. Dropping the request. 00:12:11.216 [2024-11-19 08:30:50.466009] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64909) is not found. Dropping the request. 00:12:11.216 [2024-11-19 08:30:50.466034] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64909) is not found. Dropping the request. 00:12:11.216 [2024-11-19 08:30:50.467654] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64909) is not found. 
Dropping the request. 00:12:11.216 [2024-11-19 08:30:50.467714] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64909) is not found. Dropping the request. 00:12:11.216 [2024-11-19 08:30:50.467734] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64909) is not found. Dropping the request. 00:12:11.216 [2024-11-19 08:30:50.469385] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64909) is not found. Dropping the request. 00:12:11.216 [2024-11-19 08:30:50.469442] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64909) is not found. Dropping the request. 00:12:11.216 [2024-11-19 08:30:50.469463] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64909) is not found. Dropping the request. 00:12:11.216 Child process pid: 65420 00:12:11.474 [Child] Asynchronous Event Request test 00:12:11.474 [Child] Attached to 0000:00:10.0 00:12:11.474 [Child] Attached to 0000:00:11.0 00:12:11.474 [Child] Attached to 0000:00:13.0 00:12:11.474 [Child] Attached to 0000:00:12.0 00:12:11.474 [Child] Registering asynchronous event callbacks... 00:12:11.474 [Child] Getting orig temperature thresholds of all controllers 00:12:11.474 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:11.474 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:11.474 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:11.474 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:11.474 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:11.474 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:11.474 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:11.474 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:11.474 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:11.474 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:11.474 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:11.474 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:11.474 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:11.474 [Child] Cleaning up... 00:12:11.733 Asynchronous Event Request test 00:12:11.733 Attached to 0000:00:10.0 00:12:11.733 Attached to 0000:00:11.0 00:12:11.733 Attached to 0000:00:13.0 00:12:11.733 Attached to 0000:00:12.0 00:12:11.733 Reset controller to setup AER completions for this process 00:12:11.733 Registering asynchronous event callbacks... 
00:12:11.733 Getting orig temperature thresholds of all controllers 00:12:11.733 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:11.733 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:11.733 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:11.733 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:11.733 Setting all controllers temperature threshold low to trigger AER 00:12:11.733 Waiting for all controllers temperature threshold to be set lower 00:12:11.733 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:11.733 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:11.733 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:11.733 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:11.733 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:11.733 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:11.733 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:11.733 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:11.733 Waiting for all controllers to trigger AER and reset threshold 00:12:11.733 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:11.733 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:11.733 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:11.733 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:11.733 Cleaning up... 00:12:11.733 00:12:11.733 real 0m0.655s 00:12:11.733 user 0m0.215s 00:12:11.733 sys 0m0.330s 00:12:11.733 08:30:50 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.733 08:30:50 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:12:11.733 ************************************ 00:12:11.733 END TEST nvme_multi_aen 00:12:11.733 ************************************ 00:12:11.733 08:30:50 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:11.733 08:30:50 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:11.733 08:30:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.733 08:30:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:11.733 ************************************ 00:12:11.733 START TEST nvme_startup 00:12:11.733 ************************************ 00:12:11.733 08:30:50 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:11.992 Initializing NVMe Controllers 00:12:11.992 Attached to 0000:00:10.0 00:12:11.992 Attached to 0000:00:11.0 00:12:11.993 Attached to 0000:00:13.0 00:12:11.993 Attached to 0000:00:12.0 00:12:11.993 Initialization complete. 00:12:11.993 Time used:208777.266 (us). 
00:12:11.993 00:12:11.993 real 0m0.293s 00:12:11.993 user 0m0.115s 00:12:11.993 sys 0m0.135s 00:12:11.993 08:30:51 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.993 ************************************ 00:12:11.993 END TEST nvme_startup 00:12:11.993 ************************************ 00:12:11.993 08:30:51 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:12:11.993 08:30:51 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:11.993 08:30:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:11.993 08:30:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.993 08:30:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:11.993 ************************************ 00:12:11.993 START TEST nvme_multi_secondary 00:12:11.993 ************************************ 00:12:11.993 08:30:51 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:12:11.993 08:30:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65476 00:12:11.993 08:30:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:11.993 08:30:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65477 00:12:11.993 08:30:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:11.993 08:30:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:15.276 Initializing NVMe Controllers 00:12:15.276 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:15.276 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:15.276 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:15.276 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:15.276 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:15.276 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:15.276 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:15.276 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:15.276 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:15.276 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:15.276 Initialization complete. Launching workers. 
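The three spdk_nvme_perf invocations just launched exercise SPDK's multi-process mode: -i 0 puts all three processes in the same shared-memory group, so one acts as the DPDK primary and the other two attach to the already-initialized controllers as secondaries, while the distinct -c core masks (0x1, 0x2, 0x4) pin each worker to its own core. Stripped of the harness plumbing, this phase amounts to roughly the sketch below (the binary path and flags are taken verbatim from the xtrace above; the backgrounding and wait ordering are an approximation of what nvme.sh@51-55 does):

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    $PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # core 0, runs longest (pid0)
    pid0=$!
    $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # secondary on core 2 (pid1)
    pid1=$!
    $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2     # secondary on core 1, foreground
    wait $pid0 $pid1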
00:12:15.276 ======================================================== 00:12:15.276 Latency(us) 00:12:15.276 Device Information : IOPS MiB/s Average min max 00:12:15.276 PCIE (0000:00:10.0) NSID 1 from core 2: 2339.65 9.14 6835.89 1002.03 15064.37 00:12:15.276 PCIE (0000:00:11.0) NSID 1 from core 2: 2339.65 9.14 6838.48 1023.73 14304.83 00:12:15.276 PCIE (0000:00:13.0) NSID 1 from core 2: 2339.65 9.14 6839.40 1016.40 16440.88 00:12:15.276 PCIE (0000:00:12.0) NSID 1 from core 2: 2334.32 9.12 6851.00 1002.62 15135.17 00:12:15.276 PCIE (0000:00:12.0) NSID 2 from core 2: 2334.32 9.12 6855.53 1008.65 15716.25 00:12:15.276 PCIE (0000:00:12.0) NSID 3 from core 2: 2334.32 9.12 6856.21 978.33 15343.37 00:12:15.276 ======================================================== 00:12:15.276 Total : 14021.90 54.77 6846.08 978.33 16440.88 00:12:15.276 00:12:15.534 08:30:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65476 00:12:15.534 Initializing NVMe Controllers 00:12:15.534 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:15.534 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:15.534 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:15.534 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:15.534 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:15.534 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:15.534 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:15.534 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:15.534 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:15.534 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:15.534 Initialization complete. Launching workers. 00:12:15.534 ======================================================== 00:12:15.534 Latency(us) 00:12:15.534 Device Information : IOPS MiB/s Average min max 00:12:15.534 PCIE (0000:00:10.0) NSID 1 from core 1: 5101.73 19.93 3134.28 1245.74 7711.96 00:12:15.534 PCIE (0000:00:11.0) NSID 1 from core 1: 5101.73 19.93 3135.69 1168.31 7788.16 00:12:15.534 PCIE (0000:00:13.0) NSID 1 from core 1: 5101.73 19.93 3135.63 1259.23 8077.83 00:12:15.534 PCIE (0000:00:12.0) NSID 1 from core 1: 5101.73 19.93 3135.67 1269.20 8193.82 00:12:15.534 PCIE (0000:00:12.0) NSID 2 from core 1: 5101.73 19.93 3135.66 1262.57 8647.21 00:12:15.534 PCIE (0000:00:12.0) NSID 3 from core 1: 5101.73 19.93 3135.60 1306.58 8833.74 00:12:15.534 ======================================================== 00:12:15.534 Total : 30610.38 119.57 3135.42 1168.31 8833.74 00:12:15.534 00:12:17.433 Initializing NVMe Controllers 00:12:17.433 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:17.433 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:17.433 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:17.433 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:17.433 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:17.433 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:17.433 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:17.433 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:17.433 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:17.433 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:17.433 Initialization complete. Launching workers. 
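The columns of these Latency(us) summaries tie together arithmetically, which is a handy sanity check when reading them. Taking the core-2 row for 0000:00:10.0 above (2339.65 IOPS, 9.14 MiB/s, 6835.89 us average) together with the test's 4096-byte reads at queue depth 16:

    MiB/s = IOPS x io_size / 2^20       = 2339.65 x 4096 / 1048576  ~ 9.14
    IOPS  ~ queue_depth / avg_latency   = 16 / 6835.89 us           ~ 2340/s   (Little's law)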
00:12:17.433 ======================================================== 00:12:17.433 Latency(us) 00:12:17.433 Device Information : IOPS MiB/s Average min max 00:12:17.433 PCIE (0000:00:10.0) NSID 1 from core 0: 7652.92 29.89 2089.07 960.41 14202.66 00:12:17.433 PCIE (0000:00:11.0) NSID 1 from core 0: 7652.92 29.89 2090.17 973.88 14681.80 00:12:17.433 PCIE (0000:00:13.0) NSID 1 from core 0: 7652.92 29.89 2090.12 943.37 14666.63 00:12:17.433 PCIE (0000:00:12.0) NSID 1 from core 0: 7652.92 29.89 2090.06 908.93 14622.92 00:12:17.433 PCIE (0000:00:12.0) NSID 2 from core 0: 7652.92 29.89 2090.02 883.87 14656.36 00:12:17.433 PCIE (0000:00:12.0) NSID 3 from core 0: 7652.92 29.89 2089.96 845.39 14642.21 00:12:17.433 ======================================================== 00:12:17.433 Total : 45917.54 179.37 2089.90 845.39 14681.80 00:12:17.433 00:12:17.433 08:30:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65477 00:12:17.433 08:30:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65552 00:12:17.433 08:30:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:12:17.433 08:30:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65553 00:12:17.433 08:30:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:17.433 08:30:56 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:12:21.618 Initializing NVMe Controllers 00:12:21.618 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:21.618 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:21.618 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:21.618 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:21.618 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:21.618 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:21.618 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:21.618 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:21.618 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:21.618 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:21.618 Initialization complete. Launching workers. 
00:12:21.618 ======================================================== 00:12:21.618 Latency(us) 00:12:21.618 Device Information : IOPS MiB/s Average min max 00:12:21.618 PCIE (0000:00:10.0) NSID 1 from core 1: 5449.63 21.29 2934.13 976.18 7303.92 00:12:21.618 PCIE (0000:00:11.0) NSID 1 from core 1: 5449.63 21.29 2935.78 985.53 7230.72 00:12:21.618 PCIE (0000:00:13.0) NSID 1 from core 1: 5449.63 21.29 2936.01 1023.18 7703.52 00:12:21.618 PCIE (0000:00:12.0) NSID 1 from core 1: 5449.63 21.29 2936.25 1030.43 7072.67 00:12:21.618 PCIE (0000:00:12.0) NSID 2 from core 1: 5449.63 21.29 2936.56 1020.82 7129.60 00:12:21.618 PCIE (0000:00:12.0) NSID 3 from core 1: 5449.63 21.29 2936.71 1004.70 7532.56 00:12:21.618 ======================================================== 00:12:21.618 Total : 32697.75 127.73 2935.90 976.18 7703.52 00:12:21.618 00:12:21.618 Initializing NVMe Controllers 00:12:21.618 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:21.618 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:21.618 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:21.618 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:21.618 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:21.618 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:21.618 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:21.618 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:21.618 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:21.618 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:21.618 Initialization complete. Launching workers. 00:12:21.618 ======================================================== 00:12:21.618 Latency(us) 00:12:21.618 Device Information : IOPS MiB/s Average min max 00:12:21.618 PCIE (0000:00:10.0) NSID 1 from core 0: 5260.65 20.55 3039.41 1078.63 12902.92 00:12:21.618 PCIE (0000:00:11.0) NSID 1 from core 0: 5260.65 20.55 3040.74 1114.77 12992.52 00:12:21.618 PCIE (0000:00:13.0) NSID 1 from core 0: 5260.65 20.55 3040.64 1109.95 13049.65 00:12:21.618 PCIE (0000:00:12.0) NSID 1 from core 0: 5260.65 20.55 3040.53 1113.18 13259.21 00:12:21.618 PCIE (0000:00:12.0) NSID 2 from core 0: 5260.65 20.55 3040.46 1075.95 13646.26 00:12:21.618 PCIE (0000:00:12.0) NSID 3 from core 0: 5260.65 20.55 3040.35 1079.19 12642.39 00:12:21.618 ======================================================== 00:12:21.618 Total : 31563.91 123.30 3040.35 1075.95 13646.26 00:12:21.618 00:12:22.991 Initializing NVMe Controllers 00:12:22.991 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:22.991 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:22.991 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:22.991 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:22.991 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:22.991 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:22.991 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:22.991 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:22.991 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:22.991 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:22.991 Initialization complete. Launching workers. 
00:12:22.991 ======================================================== 00:12:22.991 Latency(us) 00:12:22.991 Device Information : IOPS MiB/s Average min max 00:12:22.991 PCIE (0000:00:10.0) NSID 1 from core 2: 3487.90 13.62 4584.96 994.31 16835.04 00:12:22.991 PCIE (0000:00:11.0) NSID 1 from core 2: 3487.90 13.62 4586.53 1002.74 13509.11 00:12:22.991 PCIE (0000:00:13.0) NSID 1 from core 2: 3487.90 13.62 4585.98 1036.00 13405.81 00:12:22.991 PCIE (0000:00:12.0) NSID 1 from core 2: 3487.90 13.62 4586.35 1029.65 17200.74 00:12:22.991 PCIE (0000:00:12.0) NSID 2 from core 2: 3491.10 13.64 4581.60 1021.35 17135.31 00:12:22.991 PCIE (0000:00:12.0) NSID 3 from core 2: 3491.10 13.64 4578.29 1002.37 16896.72 00:12:22.991 ======================================================== 00:12:22.991 Total : 20933.80 81.77 4583.95 994.31 17200.74 00:12:22.991 00:12:22.991 08:31:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65552 00:12:22.991 08:31:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65553 00:12:22.991 00:12:22.991 real 0m10.882s 00:12:22.991 user 0m18.650s 00:12:22.991 sys 0m1.033s 00:12:22.991 08:31:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.991 08:31:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:12:22.991 ************************************ 00:12:22.991 END TEST nvme_multi_secondary 00:12:22.991 ************************************ 00:12:22.991 08:31:02 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:12:22.991 08:31:02 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:12:22.991 08:31:02 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64484 ]] 00:12:22.991 08:31:02 nvme -- common/autotest_common.sh@1094 -- # kill 64484 00:12:22.991 08:31:02 nvme -- common/autotest_common.sh@1095 -- # wait 64484 00:12:22.991 [2024-11-19 08:31:02.125088] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65419) is not found. Dropping the request. 00:12:22.991 [2024-11-19 08:31:02.125188] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65419) is not found. Dropping the request. 00:12:22.991 [2024-11-19 08:31:02.125241] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65419) is not found. Dropping the request. 00:12:22.991 [2024-11-19 08:31:02.125273] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65419) is not found. Dropping the request. 00:12:22.991 [2024-11-19 08:31:02.128455] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65419) is not found. Dropping the request. 00:12:22.991 [2024-11-19 08:31:02.128538] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65419) is not found. Dropping the request. 00:12:22.991 [2024-11-19 08:31:02.128569] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65419) is not found. Dropping the request. 00:12:22.992 [2024-11-19 08:31:02.128597] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65419) is not found. Dropping the request. 00:12:22.992 [2024-11-19 08:31:02.131886] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65419) is not found. Dropping the request. 
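The nvme_multi_secondary runs above come down to three spdk_nvme_perf instances sharing one DPDK shared-memory group (-i 0), pinned to disjoint cores (-c 0x1 / 0x2 / 0x4, i.e. cores 0, 1 and 2), each issuing queue-depth-16 (-q 16) 4 KiB (-o 4096) reads (-w read) for 3 or 5 seconds (-t). A hand-run equivalent using the same binary path as the log (a sketch of the traced commands, not a verbatim excerpt of nvme.sh):

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 & pid0=$!   # all three share shm group 0;
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # whichever initializes first becomes
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4             # the DPDK primary, the rest attach
  wait "$pid0" "$pid1"                                       # as secondaries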
00:12:22.992 [2024-11-19 08:31:02.131965] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65419) is not found. Dropping the request. 00:12:22.992 [2024-11-19 08:31:02.131995] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65419) is not found. Dropping the request. 00:12:22.992 [2024-11-19 08:31:02.132024] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65419) is not found. Dropping the request. 00:12:22.992 [2024-11-19 08:31:02.134334] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65419) is not found. Dropping the request. 00:12:22.992 [2024-11-19 08:31:02.134392] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65419) is not found. Dropping the request. 00:12:22.992 [2024-11-19 08:31:02.134415] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65419) is not found. Dropping the request. 00:12:22.992 [2024-11-19 08:31:02.134435] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65419) is not found. Dropping the request. 00:12:23.250 08:31:02 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:12:23.250 08:31:02 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:12:23.250 08:31:02 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:23.250 08:31:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:23.250 08:31:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.250 08:31:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:23.250 ************************************ 00:12:23.250 START TEST bdev_nvme_reset_stuck_adm_cmd 00:12:23.250 ************************************ 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:23.250 * Looking for test storage... 
00:12:23.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:23.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.250 --rc genhtml_branch_coverage=1 00:12:23.250 --rc genhtml_function_coverage=1 00:12:23.250 --rc genhtml_legend=1 00:12:23.250 --rc geninfo_all_blocks=1 00:12:23.250 --rc geninfo_unexecuted_blocks=1 00:12:23.250 00:12:23.250 ' 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:23.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.250 --rc genhtml_branch_coverage=1 00:12:23.250 --rc genhtml_function_coverage=1 00:12:23.250 --rc genhtml_legend=1 00:12:23.250 --rc geninfo_all_blocks=1 00:12:23.250 --rc geninfo_unexecuted_blocks=1 00:12:23.250 00:12:23.250 ' 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:23.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.250 --rc genhtml_branch_coverage=1 00:12:23.250 --rc genhtml_function_coverage=1 00:12:23.250 --rc genhtml_legend=1 00:12:23.250 --rc geninfo_all_blocks=1 00:12:23.250 --rc geninfo_unexecuted_blocks=1 00:12:23.250 00:12:23.250 ' 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:23.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.250 --rc genhtml_branch_coverage=1 00:12:23.250 --rc genhtml_function_coverage=1 00:12:23.250 --rc genhtml_legend=1 00:12:23.250 --rc geninfo_all_blocks=1 00:12:23.250 --rc geninfo_unexecuted_blocks=1 00:12:23.250 00:12:23.250 ' 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:23.250 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:23.250 
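The cmp_versions trace above is scripts/common.sh deciding that the installed lcov (1.15) predates 2.x, which selects the old-style --rc option spellings exported just below. Condensed into a stand-alone helper for illustration (reconstructed from the trace, not the verbatim scripts/common.sh source):

  lt() {   # true when dotted version $1 sorts before $2
      local -a ver1 ver2; local v
      IFS=.- read -ra ver1 <<< "$1"
      IFS=.- read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo 'lcov < 2: keep legacy --rc lcov_*_coverage flags'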
08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:23.251 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:23.251 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:23.251 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:23.251 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:12:23.251 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:23.251 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:23.251 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:23.251 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:12:23.251 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:23.251 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:23.251 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:23.509 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:23.509 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:23.509 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:12:23.509 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:12:23.509 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:12:23.509 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65715 00:12:23.509 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:12:23.509 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:23.509 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65715 00:12:23.510 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65715 ']' 00:12:23.510 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.510 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.510 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
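get_first_nvme_bdf above boils down to asking gen_nvme.sh for every controller's PCI address and taking the first entry. By hand, with the same paths as the log (equivalent commands, not a verbatim excerpt of autotest_common.sh):

  rootdir=/home/vagrant/spdk_repo/spdk
  "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'                    # all four BDFs
  bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
  echo "$bdf"   # 0000:00:10.0, the controller the stuck-command test attaches to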
00:12:23.510 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.510 08:31:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:23.510 [2024-11-19 08:31:02.721723] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:12:23.510 [2024-11-19 08:31:02.721895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65715 ] 00:12:23.768 [2024-11-19 08:31:02.933027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:24.026 [2024-11-19 08:31:03.063708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.026 [2024-11-19 08:31:03.063864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.026 [2024-11-19 08:31:03.063915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.026 [2024-11-19 08:31:03.063928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.593 08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.593 08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:12:24.593 08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:12:24.593 08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:24.851 nvme0n1 00:12:24.851 08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.851 08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:24.851 08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_Cth0h.txt 00:12:24.851 08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:24.851 08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.851 08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:24.851 true 00:12:24.851 08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.851 08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:24.851 08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732005063 00:12:24.851 08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65738 00:12:24.851 08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:24.851 08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:24.851 
08:31:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:26.787 08:31:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:26.787 08:31:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.787 08:31:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:26.787 [2024-11-19 08:31:05.958440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:26.787 [2024-11-19 08:31:05.958835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:26.787 [2024-11-19 08:31:05.958878] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:26.787 [2024-11-19 08:31:05.958899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.787 [2024-11-19 08:31:05.960967] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:12:26.787 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65738 00:12:26.787 08:31:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.787 08:31:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65738 00:12:26.787 08:31:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65738 00:12:26.787 08:31:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:26.787 08:31:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:26.787 08:31:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:26.787 08:31:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.787 08:31:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:26.787 08:31:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.787 08:31:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:26.787 08:31:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_Cth0h.txt 00:12:26.787 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:26.787 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:26.787 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:26.787 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:26.787 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:26.787 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:26.787 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:26.787 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:26.787 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:26.787 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:26.787 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:26.788 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:26.788 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:26.788 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:26.788 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:26.788 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:26.788 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:26.788 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:26.788 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:26.788 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_Cth0h.txt 00:12:26.788 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65715 00:12:26.788 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65715 ']' 00:12:26.788 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65715 00:12:26.788 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:12:26.788 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.788 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65715 00:12:27.046 killing process with pid 65715 00:12:27.046 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.046 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.046 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65715' 00:12:27.046 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65715 00:12:27.046 08:31:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65715 00:12:28.949 08:31:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:28.949 08:31:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:28.949 ************************************ 00:12:28.949 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:28.949 ************************************ 00:12:28.949 00:12:28.949 real 0m5.839s 
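Stripped of the tracing, the bdev_nvme_reset_stuck_adm_cmd sequence above is a short RPC conversation with the spdk_tgt started earlier (a sketch of the traced calls; the base64 payload stands for the Get Features / Number of Queues admin command shown in the log, and tmp_file is the mktemp output file it created):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  # arm a one-shot failure (sct 0 / sc 1) for admin opcode 0x0a, held for up to 15 s
  "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64 cmd> > "$tmp_file" &
  sleep 2
  "$rpc" bdev_nvme_reset_controller nvme0   # the reset path completes the stuck command manually
  wait
  "$rpc" bdev_nvme_detach_controller nvme0
  jq -r .cpl "$tmp_file"   # completion decoded above to sct 0x0 / sc 0x1, exactly as injected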
00:12:28.949 user 0m20.657s 00:12:28.949 sys 0m0.609s 00:12:28.949 08:31:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.949 08:31:08 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:28.949 08:31:08 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:28.949 08:31:08 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:28.949 08:31:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:28.949 08:31:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.949 08:31:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.949 ************************************ 00:12:28.949 START TEST nvme_fio 00:12:28.949 ************************************ 00:12:28.949 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:12:28.949 08:31:08 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:28.949 08:31:08 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:28.949 08:31:08 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:28.949 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:28.949 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:12:28.949 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:28.949 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:28.949 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:29.207 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:29.207 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:29.207 08:31:08 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:12:29.207 08:31:08 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:29.207 08:31:08 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:29.207 08:31:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:29.207 08:31:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:29.466 08:31:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:29.466 08:31:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:29.724 08:31:08 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:29.724 08:31:08 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:29.724 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:29.724 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:29.724 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:29.724 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:29.724 08:31:08 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:29.724 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:29.724 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:29.724 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:29.725 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:29.725 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:29.725 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:29.725 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:29.725 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:29.725 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:29.725 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:29.725 08:31:08 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:29.982 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:29.982 fio-3.35 00:12:29.982 Starting 1 thread 00:12:33.346 00:12:33.346 test: (groupid=0, jobs=1): err= 0: pid=65886: Tue Nov 19 08:31:12 2024 00:12:33.346 read: IOPS=15.6k, BW=60.9MiB/s (63.9MB/s)(122MiB/2001msec) 00:12:33.346 slat (nsec): min=4596, max=53980, avg=6399.16, stdev=2408.30 00:12:33.346 clat (usec): min=313, max=9482, avg=4087.85, stdev=802.57 00:12:33.346 lat (usec): min=326, max=9488, avg=4094.25, stdev=803.74 00:12:33.346 clat percentiles (usec): 00:12:33.346 | 1.00th=[ 2704], 5.00th=[ 3294], 10.00th=[ 3425], 20.00th=[ 3556], 00:12:33.346 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3982], 00:12:33.346 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 5014], 95.00th=[ 5669], 00:12:33.346 | 99.00th=[ 7046], 99.50th=[ 7767], 99.90th=[ 8717], 99.95th=[ 8979], 00:12:33.346 | 99.99th=[ 9503] 00:12:33.346 bw ( KiB/s): min=58536, max=66472, per=99.96%, avg=62360.00, stdev=3975.83, samples=3 00:12:33.346 iops : min=14634, max=16618, avg=15590.00, stdev=993.96, samples=3 00:12:33.346 write: IOPS=15.6k, BW=60.9MiB/s (63.9MB/s)(122MiB/2001msec); 0 zone resets 00:12:33.346 slat (nsec): min=4671, max=47508, avg=6481.38, stdev=2392.03 00:12:33.346 clat (usec): min=412, max=9490, avg=4085.25, stdev=796.71 00:12:33.346 lat (usec): min=419, max=9497, avg=4091.73, stdev=797.84 00:12:33.346 clat percentiles (usec): 00:12:33.346 | 1.00th=[ 2769], 5.00th=[ 3294], 10.00th=[ 3458], 20.00th=[ 3556], 00:12:33.346 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3949], 00:12:33.346 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 5014], 95.00th=[ 5669], 00:12:33.346 | 99.00th=[ 6915], 99.50th=[ 7767], 99.90th=[ 8717], 99.95th=[ 8979], 00:12:33.346 | 99.99th=[ 9503] 00:12:33.346 bw ( KiB/s): min=58824, max=65944, per=99.22%, avg=61914.67, stdev=3651.63, samples=3 00:12:33.346 iops : min=14706, max=16486, avg=15478.67, stdev=912.91, samples=3 00:12:33.346 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:12:33.346 lat (msec) : 2=0.06%, 4=60.65%, 10=39.25% 00:12:33.346 cpu : usr=98.85%, sys=0.15%, ctx=3, majf=0, minf=607 
00:12:33.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:33.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:33.346 issued rwts: total=31207,31217,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:33.346 00:12:33.346 Run status group 0 (all jobs): 00:12:33.346 READ: bw=60.9MiB/s (63.9MB/s), 60.9MiB/s-60.9MiB/s (63.9MB/s-63.9MB/s), io=122MiB (128MB), run=2001-2001msec 00:12:33.347 WRITE: bw=60.9MiB/s (63.9MB/s), 60.9MiB/s-60.9MiB/s (63.9MB/s-63.9MB/s), io=122MiB (128MB), run=2001-2001msec 00:12:33.347 ----------------------------------------------------- 00:12:33.347 Suppressions used: 00:12:33.347 count bytes template 00:12:33.347 1 32 /usr/src/fio/parse.c 00:12:33.347 1 8 libtcmalloc_minimal.so 00:12:33.347 ----------------------------------------------------- 00:12:33.347 00:12:33.347 08:31:12 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:33.347 08:31:12 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:33.347 08:31:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:33.347 08:31:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:33.605 08:31:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:33.605 08:31:12 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:33.864 08:31:13 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:33.864 08:31:13 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:33.864 08:31:13 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:33.864 08:31:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:33.864 08:31:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:33.864 08:31:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:33.864 08:31:13 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:33.864 08:31:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:33.864 08:31:13 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:33.864 08:31:13 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:33.864 08:31:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:33.864 08:31:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:33.864 08:31:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:33.864 08:31:13 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:33.864 08:31:13 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:33.864 08:31:13 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:33.864 08:31:13 nvme.nvme_fio -- 
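Each per-controller nvme_fio pass above follows the same recipe: spdk_nvme_identify confirms a namespace, the grep for 'Extended Data LBA' picks the block size (plain 4 KiB LBA formats here, hence --bs=4096), and fio is launched with SPDK's nvme ioengine plus libasan preloaded (the plugin is ASan-instrumented, the fio binary is not). Hand-run form for the second controller, as traced; note the traddr uses dots because fio reserves ':' inside --filename:

  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096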
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:33.864 08:31:13 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:34.123 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:34.123 fio-3.35 00:12:34.123 Starting 1 thread 00:12:37.407 00:12:37.407 test: (groupid=0, jobs=1): err= 0: pid=65952: Tue Nov 19 08:31:16 2024 00:12:37.407 read: IOPS=15.8k, BW=61.7MiB/s (64.7MB/s)(123MiB/2001msec) 00:12:37.407 slat (usec): min=4, max=316, avg= 6.25, stdev= 2.78 00:12:37.407 clat (usec): min=317, max=10470, avg=4034.74, stdev=674.49 00:12:37.407 lat (usec): min=322, max=10526, avg=4040.99, stdev=675.27 00:12:37.407 clat percentiles (usec): 00:12:37.407 | 1.00th=[ 2540], 5.00th=[ 3195], 10.00th=[ 3392], 20.00th=[ 3523], 00:12:37.407 | 30.00th=[ 3654], 40.00th=[ 3752], 50.00th=[ 3982], 60.00th=[ 4178], 00:12:37.407 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4686], 95.00th=[ 5145], 00:12:37.407 | 99.00th=[ 6128], 99.50th=[ 6718], 99.90th=[ 7963], 99.95th=[ 9110], 00:12:37.407 | 99.99th=[10290] 00:12:37.407 bw ( KiB/s): min=55080, max=68304, per=96.78%, avg=61136.00, stdev=6681.76, samples=3 00:12:37.407 iops : min=13770, max=17076, avg=15284.00, stdev=1670.44, samples=3 00:12:37.408 write: IOPS=15.8k, BW=61.8MiB/s (64.8MB/s)(124MiB/2001msec); 0 zone resets 00:12:37.408 slat (usec): min=4, max=159, avg= 6.38, stdev= 2.50 00:12:37.408 clat (usec): min=260, max=10394, avg=4033.53, stdev=678.89 00:12:37.408 lat (usec): min=265, max=10410, avg=4039.91, stdev=679.69 00:12:37.408 clat percentiles (usec): 00:12:37.408 | 1.00th=[ 2507], 5.00th=[ 3163], 10.00th=[ 3392], 20.00th=[ 3523], 00:12:37.408 | 30.00th=[ 3654], 40.00th=[ 3752], 50.00th=[ 3982], 60.00th=[ 4178], 00:12:37.408 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4686], 95.00th=[ 5145], 00:12:37.408 | 99.00th=[ 6128], 99.50th=[ 6718], 99.90th=[ 8160], 99.95th=[ 9241], 00:12:37.408 | 99.99th=[10159] 00:12:37.408 bw ( KiB/s): min=55456, max=67736, per=95.95%, avg=60690.67, stdev=6337.07, samples=3 00:12:37.408 iops : min=13864, max=16934, avg=15172.67, stdev=1584.27, samples=3 00:12:37.408 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.02% 00:12:37.408 lat (msec) : 2=0.17%, 4=50.44%, 10=49.32%, 20=0.02% 00:12:37.408 cpu : usr=98.20%, sys=0.50%, ctx=25, majf=0, minf=608 00:12:37.408 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:37.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:37.408 issued rwts: total=31602,31642,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.408 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:37.408 00:12:37.408 Run status group 0 (all jobs): 00:12:37.408 READ: bw=61.7MiB/s (64.7MB/s), 61.7MiB/s-61.7MiB/s (64.7MB/s-64.7MB/s), io=123MiB (129MB), run=2001-2001msec 00:12:37.408 WRITE: bw=61.8MiB/s (64.8MB/s), 61.8MiB/s-61.8MiB/s (64.8MB/s-64.8MB/s), io=124MiB (130MB), run=2001-2001msec 00:12:37.408 ----------------------------------------------------- 00:12:37.408 Suppressions used: 00:12:37.408 count bytes template 00:12:37.408 1 32 /usr/src/fio/parse.c 00:12:37.408 1 8 libtcmalloc_minimal.so 00:12:37.408 ----------------------------------------------------- 00:12:37.408 00:12:37.408 08:31:16 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:37.408 08:31:16 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:37.408 08:31:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:37.408 08:31:16 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:37.667 08:31:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:37.667 08:31:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:37.924 08:31:17 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:37.925 08:31:17 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:37.925 08:31:17 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:37.925 08:31:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:37.925 08:31:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:37.925 08:31:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:37.925 08:31:17 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:37.925 08:31:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:37.925 08:31:17 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:37.925 08:31:17 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:37.925 08:31:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:37.925 08:31:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:37.925 08:31:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:37.925 08:31:17 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:37.925 08:31:17 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:37.925 08:31:17 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:37.925 08:31:17 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:37.925 08:31:17 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:38.183 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:38.183 fio-3.35 00:12:38.183 Starting 1 thread 00:12:41.467 00:12:41.467 test: (groupid=0, jobs=1): err= 0: pid=66013: Tue Nov 19 08:31:20 2024 00:12:41.467 read: IOPS=14.2k, BW=55.5MiB/s (58.2MB/s)(111MiB/2001msec) 00:12:41.467 slat (usec): min=4, max=102, avg= 7.07, stdev= 2.85 00:12:41.467 clat (usec): min=296, max=9074, avg=4485.34, stdev=888.06 00:12:41.467 lat (usec): min=302, max=9081, avg=4492.40, stdev=889.40 00:12:41.467 clat percentiles (usec): 00:12:41.467 | 1.00th=[ 2933], 5.00th=[ 3523], 10.00th=[ 3654], 20.00th=[ 3818], 00:12:41.467 | 30.00th=[ 3982], 40.00th=[ 4228], 
50.00th=[ 4424], 60.00th=[ 4490], 00:12:41.467 | 70.00th=[ 4621], 80.00th=[ 4817], 90.00th=[ 5538], 95.00th=[ 6521], 00:12:41.467 | 99.00th=[ 7701], 99.50th=[ 8029], 99.90th=[ 8455], 99.95th=[ 8717], 00:12:41.467 | 99.99th=[ 9110] 00:12:41.467 bw ( KiB/s): min=51128, max=58848, per=98.23%, avg=55845.33, stdev=4135.78, samples=3 00:12:41.467 iops : min=12782, max=14712, avg=13961.33, stdev=1033.94, samples=3 00:12:41.467 write: IOPS=14.2k, BW=55.5MiB/s (58.2MB/s)(111MiB/2001msec); 0 zone resets 00:12:41.467 slat (nsec): min=4704, max=68647, avg=7131.26, stdev=2746.38 00:12:41.467 clat (usec): min=307, max=9133, avg=4482.94, stdev=878.06 00:12:41.467 lat (usec): min=313, max=9139, avg=4490.07, stdev=879.37 00:12:41.467 clat percentiles (usec): 00:12:41.467 | 1.00th=[ 2933], 5.00th=[ 3523], 10.00th=[ 3654], 20.00th=[ 3818], 00:12:41.467 | 30.00th=[ 3982], 40.00th=[ 4228], 50.00th=[ 4424], 60.00th=[ 4490], 00:12:41.467 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5538], 95.00th=[ 6521], 00:12:41.467 | 99.00th=[ 7635], 99.50th=[ 7963], 99.90th=[ 8586], 99.95th=[ 8717], 00:12:41.467 | 99.99th=[ 9110] 00:12:41.467 bw ( KiB/s): min=51696, max=58040, per=98.22%, avg=55869.33, stdev=3615.19, samples=3 00:12:41.467 iops : min=12924, max=14510, avg=13967.33, stdev=903.80, samples=3 00:12:41.467 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:12:41.467 lat (msec) : 2=0.05%, 4=30.42%, 10=69.49% 00:12:41.467 cpu : usr=98.80%, sys=0.00%, ctx=3, majf=0, minf=607 00:12:41.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:41.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:41.467 issued rwts: total=28439,28456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:41.467 00:12:41.467 Run status group 0 (all jobs): 00:12:41.467 READ: bw=55.5MiB/s (58.2MB/s), 55.5MiB/s-55.5MiB/s (58.2MB/s-58.2MB/s), io=111MiB (116MB), run=2001-2001msec 00:12:41.467 WRITE: bw=55.5MiB/s (58.2MB/s), 55.5MiB/s-55.5MiB/s (58.2MB/s-58.2MB/s), io=111MiB (117MB), run=2001-2001msec 00:12:41.467 ----------------------------------------------------- 00:12:41.467 Suppressions used: 00:12:41.467 count bytes template 00:12:41.467 1 32 /usr/src/fio/parse.c 00:12:41.467 1 8 libtcmalloc_minimal.so 00:12:41.467 ----------------------------------------------------- 00:12:41.467 00:12:41.467 08:31:20 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:41.467 08:31:20 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:41.467 08:31:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:41.467 08:31:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:41.726 08:31:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:41.726 08:31:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:42.292 08:31:21 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:42.292 08:31:21 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:42.292 08:31:21 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:42.292 08:31:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:42.292 08:31:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:42.292 08:31:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:42.292 08:31:21 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:42.292 08:31:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:42.292 08:31:21 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:42.292 08:31:21 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:42.292 08:31:21 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:42.292 08:31:21 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:42.292 08:31:21 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:42.292 08:31:21 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:42.292 08:31:21 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:42.292 08:31:21 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:42.292 08:31:21 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:42.292 08:31:21 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:42.292 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:42.292 fio-3.35 00:12:42.292 Starting 1 thread 00:12:46.476 00:12:46.476 test: (groupid=0, jobs=1): err= 0: pid=66079: Tue Nov 19 08:31:25 2024 00:12:46.476 read: IOPS=15.2k, BW=59.2MiB/s (62.1MB/s)(119MiB/2001msec) 00:12:46.476 slat (nsec): min=4569, max=66482, avg=6543.84, stdev=2645.96 00:12:46.476 clat (usec): min=679, max=8396, avg=4199.98, stdev=819.32 00:12:46.476 lat (usec): min=692, max=8435, avg=4206.52, stdev=820.70 00:12:46.476 clat percentiles (usec): 00:12:46.476 | 1.00th=[ 2999], 5.00th=[ 3392], 10.00th=[ 3523], 20.00th=[ 3654], 00:12:46.476 | 30.00th=[ 3720], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 4146], 00:12:46.476 | 70.00th=[ 4359], 80.00th=[ 4621], 90.00th=[ 5473], 95.00th=[ 5932], 00:12:46.476 | 99.00th=[ 7046], 99.50th=[ 7439], 99.90th=[ 7898], 99.95th=[ 7963], 00:12:46.476 | 99.99th=[ 8225] 00:12:46.476 bw ( KiB/s): min=57136, max=67312, per=100.00%, avg=60845.33, stdev=5620.49, samples=3 00:12:46.476 iops : min=14284, max=16828, avg=15211.33, stdev=1405.12, samples=3 00:12:46.476 write: IOPS=15.2k, BW=59.4MiB/s (62.2MB/s)(119MiB/2001msec); 0 zone resets 00:12:46.476 slat (nsec): min=4655, max=88090, avg=6717.30, stdev=2755.31 00:12:46.476 clat (usec): min=502, max=8444, avg=4201.88, stdev=818.10 00:12:46.476 lat (usec): min=515, max=8450, avg=4208.60, stdev=819.50 00:12:46.476 clat percentiles (usec): 00:12:46.476 | 1.00th=[ 3032], 5.00th=[ 3425], 10.00th=[ 3556], 20.00th=[ 3654], 00:12:46.476 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3884], 60.00th=[ 4146], 00:12:46.476 | 70.00th=[ 4359], 80.00th=[ 4621], 90.00th=[ 5473], 95.00th=[ 5932], 
00:12:46.476 | 99.00th=[ 7046], 99.50th=[ 7373], 99.90th=[ 7898], 99.95th=[ 7963], 00:12:46.476 | 99.99th=[ 8225] 00:12:46.476 bw ( KiB/s): min=57240, max=66736, per=99.49%, avg=60480.00, stdev=5419.01, samples=3 00:12:46.476 iops : min=14310, max=16684, avg=15120.00, stdev=1354.75, samples=3 00:12:46.476 lat (usec) : 750=0.01%, 1000=0.01% 00:12:46.476 lat (msec) : 2=0.03%, 4=56.39%, 10=43.56% 00:12:46.476 cpu : usr=98.85%, sys=0.05%, ctx=5, majf=0, minf=605 00:12:46.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:46.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:46.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:46.476 issued rwts: total=30351,30409,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:46.476 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:46.476 00:12:46.476 Run status group 0 (all jobs): 00:12:46.476 READ: bw=59.2MiB/s (62.1MB/s), 59.2MiB/s-59.2MiB/s (62.1MB/s-62.1MB/s), io=119MiB (124MB), run=2001-2001msec 00:12:46.476 WRITE: bw=59.4MiB/s (62.2MB/s), 59.4MiB/s-59.4MiB/s (62.2MB/s-62.2MB/s), io=119MiB (125MB), run=2001-2001msec 00:12:46.476 ----------------------------------------------------- 00:12:46.476 Suppressions used: 00:12:46.476 count bytes template 00:12:46.476 1 32 /usr/src/fio/parse.c 00:12:46.476 1 8 libtcmalloc_minimal.so 00:12:46.476 ----------------------------------------------------- 00:12:46.476 00:12:46.476 08:31:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:46.476 08:31:25 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:12:46.476 00:12:46.476 real 0m17.359s 00:12:46.476 user 0m13.745s 00:12:46.476 sys 0m2.607s 00:12:46.476 08:31:25 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.476 08:31:25 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:12:46.476 ************************************ 00:12:46.476 END TEST nvme_fio 00:12:46.476 ************************************ 00:12:46.476 ************************************ 00:12:46.476 END TEST nvme 00:12:46.476 ************************************ 00:12:46.476 00:12:46.476 real 1m31.733s 00:12:46.476 user 3m47.176s 00:12:46.476 sys 0m15.078s 00:12:46.476 08:31:25 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.476 08:31:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:46.476 08:31:25 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:12:46.476 08:31:25 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:46.476 08:31:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:46.476 08:31:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.476 08:31:25 -- common/autotest_common.sh@10 -- # set +x 00:12:46.476 ************************************ 00:12:46.476 START TEST nvme_scc 00:12:46.476 ************************************ 00:12:46.476 08:31:25 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:46.476 * Looking for test storage... 
00:12:46.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:46.476 08:31:25 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:46.476 08:31:25 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:46.476 08:31:25 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:46.734 08:31:25 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@345 -- # : 1 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@368 -- # return 0 00:12:46.735 08:31:25 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.735 08:31:25 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:46.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.735 --rc genhtml_branch_coverage=1 00:12:46.735 --rc genhtml_function_coverage=1 00:12:46.735 --rc genhtml_legend=1 00:12:46.735 --rc geninfo_all_blocks=1 00:12:46.735 --rc geninfo_unexecuted_blocks=1 00:12:46.735 00:12:46.735 ' 00:12:46.735 08:31:25 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:46.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.735 --rc genhtml_branch_coverage=1 00:12:46.735 --rc genhtml_function_coverage=1 00:12:46.735 --rc genhtml_legend=1 00:12:46.735 --rc geninfo_all_blocks=1 00:12:46.735 --rc geninfo_unexecuted_blocks=1 00:12:46.735 00:12:46.735 ' 00:12:46.735 08:31:25 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:46.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.735 --rc genhtml_branch_coverage=1 00:12:46.735 --rc genhtml_function_coverage=1 00:12:46.735 --rc genhtml_legend=1 00:12:46.735 --rc geninfo_all_blocks=1 00:12:46.735 --rc geninfo_unexecuted_blocks=1 00:12:46.735 00:12:46.735 ' 00:12:46.735 08:31:25 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:46.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.735 --rc genhtml_branch_coverage=1 00:12:46.735 --rc genhtml_function_coverage=1 00:12:46.735 --rc genhtml_legend=1 00:12:46.735 --rc geninfo_all_blocks=1 00:12:46.735 --rc geninfo_unexecuted_blocks=1 00:12:46.735 00:12:46.735 ' 00:12:46.735 08:31:25 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:46.735 08:31:25 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:46.735 08:31:25 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:46.735 08:31:25 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:46.735 08:31:25 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.735 08:31:25 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.735 08:31:25 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.735 08:31:25 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.735 08:31:25 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.735 08:31:25 nvme_scc -- paths/export.sh@5 -- # export PATH 00:12:46.735 08:31:25 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
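The lcov probe traced above (`lt 1.15 2` via cmp_versions) splits both version strings on '.', '-' or ':' and compares them component by component, treating a missing component as 0. A minimal sketch of that comparison logic (simplified: the real scripts/common.sh also validates that each component is numeric before comparing):

    version_lt() {
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local v n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # strictly less
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2, use the 1.x --rc option names"

For 1.15 vs 2 the first components already decide it (1 < 2), which is why the trace above takes the "lcov 1.x" branch and sets LCOV_OPTS to the old --rc lcov_* names.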
00:12:46.735 08:31:25 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:12:46.735 08:31:25 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:46.735 08:31:25 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:12:46.735 08:31:25 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:46.735 08:31:25 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:12:46.735 08:31:25 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:46.735 08:31:25 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:46.735 08:31:25 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:46.735 08:31:25 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:12:46.735 08:31:25 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:46.735 08:31:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:12:46.735 08:31:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:12:46.735 08:31:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:12:46.735 08:31:25 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:46.994 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:47.252 Waiting for block devices as requested 00:12:47.252 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:47.252 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:47.510 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:47.510 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:52.778 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:52.778 08:31:31 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:52.778 08:31:31 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:52.778 08:31:31 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:52.778 08:31:31 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:52.778 08:31:31 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
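The wall of trace that follows is nvme_get populating a bash associative array one register per eval: `nvme id-ctrl /dev/nvme0` prints "reg : val" pairs, and the function reads each line with IFS=: and stores the value under the register name. A minimal standalone sketch of that loop (simplified: the real nvme/functions.sh evals into a caller-named array such as nvme0 and preserves trailing padding in values like sn='12341 '):

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # "vid       " -> "vid"
        [[ -n $reg && -n $val ]] && ctrl[$reg]=${val# }
    done < <(nvme id-ctrl /dev/nvme0)
    echo "vid=${ctrl[vid]} sn=${ctrl[sn]} mdts=${ctrl[mdts]}"

Note that only the first ':' splits the line; read assigns the remainder (colons included) to val, which is what keeps composite fields like the ps0 power-state string intact.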
00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:52.778 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.779 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:52.780 08:31:31 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.780 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:52.781 08:31:31 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:52.781 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
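The id-ns dump in progress here already reported flbas=0x4 for nvme0n1, and the lbaf table printed just below shows format 4 as "ms:0 lbads:12 rp:0 (in use)". Decoding those two fields gives the namespace's block size; per the NVMe spec, FLBAS bits 3:0 index the LBA format table when the namespace has at most 16 formats:

    flbas=0x4
    fmt=$(( flbas & 0xf ))                       # -> 4, the "(in use)" entry
    lbads=12                                     # from the lbaf4 line below
    echo "LBA size: $(( 1 << lbads )) bytes"     # -> 4096, the 4 KiB I/O size seen in the fio job above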
00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:52.782 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:52.783 08:31:31 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:52.783 08:31:31 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:52.783 08:31:31 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:52.783 08:31:31 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:52.783 08:31:31 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.783 08:31:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:52.783 
00:12:52.784 08:31:31 nvme_scc -- nvme/functions.sh@22-23 -- # id-ctrl /dev/nvme1: vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:12:52.785 08:31:31 nvme_scc -- nvme/functions.sh@22-23 -- # oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:12:52.786 08:31:31 nvme_scc -- nvme/functions.sh@22-23 -- # sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:12:52.787 08:31:31 nvme_scc -- nvme/functions.sh@53-57 -- # local -n _ctrl_ns=nvme1_ns; /sys/class/nvme/nvme1/nvme1n1 exists, ns_dev=nvme1n1; nvme_get nvme1n1 id-ns /dev/nvme1n1
00:12:52.787 08:31:32 nvme_scc -- nvme/functions.sh@22-23 -- # id-ns /dev/nvme1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@22-23 -- # lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@58-63 -- # _ctrl_ns[1]=nvme1n1; ctrls[nvme1]=nvme1 nvmes[nvme1]=nvme1_ns bdfs[nvme1]=0000:00:10.0 ordered_ctrls[1]=nvme1
00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@47-50 -- # /sys/class/nvme/nvme2 exists, pci=0000:00:12.0, pci_can_use 0000:00:12.0 returns 0
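With nvme1 and its namespace registered, the functions.sh@47-63 markers repeat for the next controller. For orientation, this is the shape of the discovery loop being traced: enumerate controllers in sysfs, gate each behind pci_can_use, identify the controller and each of its namespaces via nvme_get, and record the results in the ctrls/nvmes/bdfs/ordered_ctrls maps. A reconstruction from the trace, not the verbatim script; the sysfs-to-BDF resolution and the per-controller namespace-array declaration are assumptions, since the trace only shows their results:

    # Sketch of the scan traced at functions.sh@47-63 (reconstruction).
    scan_nvme_sketch() {
      local ctrl pci ctrl_dev ns ns_dev
      for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed; trace shows e.g. pci=0000:00:10.0
        pci_can_use "$pci" || continue                   # honors PCI allow/block lists
        ctrl_dev=${ctrl##*/}                             # e.g. nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        declare -ga "${ctrl_dev}_ns=()"                  # assumed; declared elsewhere upstream
        local -n _ctrl_ns=${ctrl_dev}_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do
          [[ -e $ns ]] || continue
          ns_dev=${ns##*/}                               # e.g. nvme1n1
          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
          _ctrl_ns[${ns_dev##*n}]=$ns_dev                # namespace index -> name
        done
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # index by controller number
      done
    }

Indexing ordered_ctrls by the controller number is what keeps later per-device test stages deterministic regardless of sysfs enumeration order.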
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.789 08:31:32 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:52.789 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:52.790 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:53.054 08:31:32 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:53.054 08:31:32 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
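The trace above is nvme/functions.sh's nvme_get loop populating the global associative array nvme2 from `nvme id-ctrl /dev/nvme2`: each output line is split on the colon via IFS=: and `read -r reg val`, empty values are skipped by the `[[ -n ... ]]` guards, and each surviving reg/val pair is eval'd into the array declared with `local -gA`. A minimal standalone sketch of the same pattern, assuming nvme-cli is installed and /dev/nvme2 exists (the fixed array name here stands in for the dynamic `$ref` the real function receives):

    #!/usr/bin/env bash
    # Sketch of the id-ctrl parsing loop traced above (simplified: the real
    # nvme_get evals into a caller-named array and keeps nvme-cli's padding).
    declare -A ctrl=()
    while IFS=: read -r reg val; do      # split "reg : val" on the colon
        reg=${reg//[[:space:]]/}         # trim padding around the field name
        [[ -n $val ]] || continue        # skip blanks, like the [[ -n ... ]] guards
        ctrl[$reg]=${val# }              # e.g. ctrl[vid]='0x1b36'
    done < <(nvme id-ctrl /dev/nvme2)
    echo "vid=${ctrl[vid]} sn=${ctrl[sn]}"

Note that the thermal thresholds captured at the end of this pass are reported in kelvin, so wctemp=343 and cctemp=373 correspond to 70 °C and 100 °C.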
00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.054 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:53.055 08:31:32 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
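Among the values recorded just above, sqes=0x66 and cqes=0x44 pack two sizes per byte: the low nibble is the required (minimum) queue-entry size and the high nibble the maximum, each as log2 of the byte count. A hypothetical one-off helper (not part of nvme/functions.sh) makes the decoding concrete:

    # decode_qes is illustrative only, not part of the traced library.
    decode_qes() {
        local v=$(( $1 ))
        printf 'min=%d max=%d bytes\n' $(( 1 << (v & 0xf) )) $(( 1 << (v >> 4) ))
    }
    decode_qes 0x66    # SQES: min=64 max=64 bytes
    decode_qes 0x44    # CQES: min=16 max=16 bytes

So this QEMU controller advertises the standard 64-byte submission and 16-byte completion queue entries.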
00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.055 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:53.056 
08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
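Between the controller dump and the per-namespace capture in progress here, functions.sh@53-58 walks the controller's namespaces through sysfs: a nameref `_ctrl_ns` aliases the per-controller map (nvme2_ns), the glob `"$ctrl/${ctrl##*/}n"*` expands to /sys/class/nvme/nvme2/nvme2n1, nvme2n2, and so on, and each hit is fed back into nvme_get with id-ns. A sketch of that walk, with the loop body reduced to the bookkeeping (the real code also runs the id-ns capture shown in this trace):

    #!/usr/bin/env bash
    # Sketch of the sysfs namespace walk at functions.sh@53-58.
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns          # nameref aliasing the per-controller map
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/${ctrl##*/}n"*; do   # -> .../nvme2n1 .../nvme2n2 .../nvme2n3
        [[ -e $ns ]] || continue          # glob may not match; guard like @55
        ns_dev=${ns##*/}                  # nvme2n1, nvme2n2, ...
        _ctrl_ns[${ns##*n}]=$ns_dev       # keyed by namespace number: _ctrl_ns[1]=nvme2n1
    done

The `local -n` in the traced code does the same thing inside the function's scope; `declare -n` is its top-level equivalent.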
00:12:53.056 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:53.057 08:31:32 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:53.057 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:53.058 08:31:32 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:53.058 08:31:32 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.058 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
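In the lbaf0-lbaf7 descriptors (recorded for nvme2n1 above and repeated next for nvme2n2), ms is the per-block metadata size in bytes, lbads is log2 of the data block size, and rp is relative performance; flbas=0x4 selects lbaf4, which nvme-cli accordingly tags "(in use)". With lbads:12 in the active format and nsze=0x100000, the namespace size works out as below (plain bash arithmetic, just to make the units concrete):

    echo $(( 1 << 12 ))                 # 4096-byte LBAs for the in-use format
    echo $(( 0x100000 * (1 << 12) ))    # 4294967296 bytes, i.e. a 4 GiB namespace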
00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.059 
08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:53.059 08:31:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 
08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:53.060 08:31:32 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.060 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:53.061 
08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:53.061 08:31:32 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:53.061 08:31:32 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:53.061 08:31:32 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:53.061 08:31:32 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:53.061 08:31:32 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:53.061 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
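Two of the id-ctrl fields just captured decode as follows: ver=0x10400 encodes NVMe 1.4.0 (major version in the upper 16 bits, minor in the next 8), and mdts=7 caps a single data transfer at 2^7 minimum-size pages — 512 KiB assuming CAP.MPSMIN corresponds to 4 KiB pages, the usual QEMU setting. A quick decode under that page-size assumption:

    ver=0x10400; mdts=7; mpsmin_bytes=4096   # 4 KiB MPSMIN is an assumption here
    printf 'NVMe %d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff ))   # NVMe 1.4
    echo $(( mpsmin_bytes << mdts ))                                  # 524288-byte max transfer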
00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
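ctratt=0x88010 just read is a bitmask of controller attributes. Per my reading of the NVMe 2.0 base spec (treat the bit positions as an assumption), bit 4 (0x10) is Endurance Groups and bit 19 (0x80000) advertises Flexible Data Placement — consistent with this controller's subnqn (nqn.2019-08.org.qemu:fdp-subsys3, captured further down) and the nvme_fdp test queued at the end of this run; the remaining set bit is left undecoded here.

    ctratt=0x88010
    (( ctratt & (1 << 4)  )) && echo "endurance groups"
    (( ctratt & (1 << 19) )) && echo "flexible data placement (FDP)"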
00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.062 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 
08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
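In the entries that follow, sqes=0x66 and cqes=0x44 each pack two sizes: the low nibble is the required (minimum) entry size and the high nibble the maximum, both as powers of two — 64-byte submission queue entries and 16-byte completion queue entries here, the standard NVMe sizes:

    sqes=0x66; cqes=0x44
    echo $(( 1 << (sqes & 0xf) ))   # 64-byte SQ entries (required)
    echo $(( 1 << (cqes >> 4) ))    # 16-byte CQ entries (maximum)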
00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:53.063 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.064 08:31:32 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
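oncs=0x15d captured just above is the Optional NVM Command Support mask; the ctrl_has_scc checks further down test exactly bit 8, the Copy command, to decide which controllers qualify for the simple-copy test — the same `(( oncs & 1 << 8 ))` expression functions.sh@188 traces below:

    oncs=0x15d
    (( oncs & (1 << 8) )) && echo "Copy (SCC) supported"   # bit 8 of ONCS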
00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:53.064 08:31:32 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:53.064 08:31:32 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:12:53.064 08:31:32 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:12:53.065 
08:31:32 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:12:53.065 08:31:32 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:12:53.324 08:31:32 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:12:53.324 08:31:32 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:12:53.324 08:31:32 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:12:53.324 08:31:32 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:12:53.324 08:31:32 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:53.324 08:31:32 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:53.324 08:31:32 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:53.324 08:31:32 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:53.324 08:31:32 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:53.324 08:31:32 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:53.324 08:31:32 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:12:53.324 08:31:32 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:12:53.324 08:31:32 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:12:53.324 08:31:32 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:12:53.324 08:31:32 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:12:53.324 08:31:32 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:12:53.324 08:31:32 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:53.581 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:54.196 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:54.196 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:54.196 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:54.453 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:12:54.453 08:31:33 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:12:54.453 08:31:33 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:54.453 08:31:33 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.453 08:31:33 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:12:54.453 ************************************ 00:12:54.453 START TEST nvme_simple_copy 00:12:54.453 ************************************ 00:12:54.453 08:31:33 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:12:54.712 Initializing NVMe Controllers 00:12:54.712 Attaching to 0000:00:10.0 00:12:54.712 Controller supports SCC. Attached to 0000:00:10.0 00:12:54.712 Namespace ID: 1 size: 6GB 00:12:54.712 Initialization complete. 00:12:54.712 00:12:54.712 Controller QEMU NVMe Ctrl (12340 ) 00:12:54.712 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:12:54.712 Namespace Block Size:4096 00:12:54.712 Writing LBAs 0 to 63 with Random Data 00:12:54.712 Copied LBAs from 0 - 63 to the Destination LBA 256 00:12:54.712 LBAs matching Written Data: 64 00:12:54.712 00:12:54.712 real 0m0.296s 00:12:54.712 user 0m0.120s 00:12:54.712 sys 0m0.074s 00:12:54.712 ************************************ 00:12:54.712 END TEST nvme_simple_copy 00:12:54.712 ************************************ 00:12:54.712 08:31:33 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.712 08:31:33 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:12:54.712 ************************************ 00:12:54.712 END TEST nvme_scc 00:12:54.712 ************************************ 00:12:54.712 00:12:54.712 real 0m8.236s 00:12:54.712 user 0m1.412s 00:12:54.712 sys 0m1.702s 00:12:54.712 08:31:33 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.712 08:31:33 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:12:54.712 08:31:33 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:12:54.712 08:31:33 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:12:54.712 08:31:33 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:12:54.712 08:31:33 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:12:54.712 08:31:33 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:12:54.712 08:31:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:54.712 08:31:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.712 08:31:33 -- common/autotest_common.sh@10 -- # set +x 00:12:54.712 ************************************ 00:12:54.712 START TEST nvme_fdp 00:12:54.712 ************************************ 00:12:54.712 08:31:33 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:12:54.970 * Looking for test storage... 
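
The nvme_scc suite that just finished picked its controller by testing ONCS bit 8, the Simple Copy flag, against the 0x15d value each controller reported. A minimal standalone sketch of that check, assuming nvme-cli's plain-text id-ctrl layout and /dev/nvme1 as the target device:

    # Bit 8 of ONCS advertises the Simple Copy command (0x15d & 0x100 != 0).
    oncs=$(nvme id-ctrl /dev/nvme1 | awk '/^oncs/ {print $3}')
    if (( oncs & 1 << 8 )); then
        echo "Simple Copy supported (oncs=$oncs)"
    fi

This mirrors ctrl_has_scc in the trace above: all four controllers report oncs=0x15d, so all pass the test, and the harness settles on nvme1 at 0000:00:10.0.
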
00:12:54.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:54.970 08:31:34 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:54.970 08:31:34 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:12:54.970 08:31:34 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:54.970 08:31:34 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.970 08:31:34 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:12:54.970 08:31:34 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.970 08:31:34 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:54.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.970 --rc genhtml_branch_coverage=1 00:12:54.970 --rc genhtml_function_coverage=1 00:12:54.970 --rc genhtml_legend=1 00:12:54.970 --rc geninfo_all_blocks=1 00:12:54.970 --rc geninfo_unexecuted_blocks=1 00:12:54.970 00:12:54.970 ' 00:12:54.970 08:31:34 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:54.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.970 --rc genhtml_branch_coverage=1 00:12:54.970 --rc genhtml_function_coverage=1 00:12:54.970 --rc genhtml_legend=1 00:12:54.970 --rc geninfo_all_blocks=1 00:12:54.970 --rc geninfo_unexecuted_blocks=1 00:12:54.970 00:12:54.970 ' 00:12:54.970 08:31:34 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:54.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.970 --rc genhtml_branch_coverage=1 00:12:54.970 --rc genhtml_function_coverage=1 00:12:54.970 --rc genhtml_legend=1 00:12:54.970 --rc geninfo_all_blocks=1 00:12:54.970 --rc geninfo_unexecuted_blocks=1 00:12:54.970 00:12:54.970 ' 00:12:54.970 08:31:34 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:54.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.970 --rc genhtml_branch_coverage=1 00:12:54.970 --rc genhtml_function_coverage=1 00:12:54.970 --rc genhtml_legend=1 00:12:54.970 --rc geninfo_all_blocks=1 00:12:54.970 --rc geninfo_unexecuted_blocks=1 00:12:54.970 00:12:54.970 ' 00:12:54.970 08:31:34 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:54.971 08:31:34 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:54.971 08:31:34 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:54.971 08:31:34 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:54.971 08:31:34 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:54.971 08:31:34 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.971 08:31:34 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.971 08:31:34 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.971 08:31:34 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.971 08:31:34 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.971 08:31:34 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.971 08:31:34 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.971 08:31:34 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:12:54.971 08:31:34 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
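
The lcov probe traced above runs cmp_versions ("lt 1.15 2") to decide which coverage flags apply: both version strings are split on dots and dashes into arrays and compared element by element, and since the installed lcov predates 2.x, the rc knobs keep their older lcov_ prefix. A condensed sketch of that comparison, not the verbatim scripts/common.sh implementation:

    # Return success when version $1 is older than version $2.
    version_lt() {
        local -a v1 v2; local i n
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "pre-2.x lcov: use the --rc lcov_* option names"
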
00:12:54.971 08:31:34 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:12:54.971 08:31:34 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:54.971 08:31:34 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:12:54.971 08:31:34 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:54.971 08:31:34 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:12:54.971 08:31:34 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:54.971 08:31:34 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:54.971 08:31:34 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:54.971 08:31:34 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:12:54.971 08:31:34 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:54.971 08:31:34 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:55.228 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:55.486 Waiting for block devices as requested 00:12:55.486 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:55.745 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:55.745 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:55.745 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:01.018 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:01.018 08:31:40 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:01.018 08:31:40 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:01.018 08:31:40 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:01.018 08:31:40 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:01.018 08:31:40 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
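
scan_nvme_ctrls fills those associative arrays by walking /sys/class/nvme and handing each controller to nvme_get, whose read loop is traced at length below: id-ctrl output is split on ":" into register/value pairs and stored per controller. A compact sketch of the same pattern, assuming nvme-cli's "reg : value" text layout (the real functions.sh additionally routes the assignments through eval so the array name can vary):

    # Parse 'nvme id-ctrl' into an associative array, one entry per register.
    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                 # strip padding around the name
        [[ -n $reg && -n $val ]] && nvme0[$reg]=${val# }
    done < <(nvme id-ctrl /dev/nvme0)
    echo "model: ${nvme0[mn]}, oncs: ${nvme0[oncs]}"

Note that read consumes only up to the first colon for reg, so values that themselves contain colons (the ps0/rwt power-state lines, for instance) survive intact, exactly as they do in the trace.
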
00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:01.018 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:01.019 08:31:40 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:01.019 08:31:40 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
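
Two of the registers captured above decode usefully by hand. ver=0x10400 splits into major/minor/tertiary bytes of 1/4/0, i.e. NVMe 1.4.0, and mdts=7 is a power-of-two multiplier of the controller's minimum memory page size, so the largest single transfer is 2^7 pages. The arithmetic, assuming CAP.MPSMIN = 0 (4 KiB pages), which this log does not show:

    printf '%d.%d.%d\n' $(( 0x10400 >> 16 )) $(( (0x10400 >> 8) & 0xff )) $(( 0x10400 & 0xff ))   # 1.4.0
    echo $(( (1 << 7) * 4096 ))   # 524288 bytes = 512 KiB max data transfer size
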
00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:01.019 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:01.020 08:31:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:01.020 08:31:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:01.020 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:01.021 08:31:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:01.021 
08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.021 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:01.022 08:31:40 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:01.022 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:01.023 08:31:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:01.023 08:31:40 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:01.023 08:31:40 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:01.023 08:31:40 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:01.023 08:31:40 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # 
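The capture of nvme0 and its namespace nvme0n1 is complete at this point, and the for ctrl in /sys/class/nvme/nvme* loop moves on to nvme1. Every IFS=: / read -r reg val / eval triplet in this trace comes from the same nvme_get helper in nvme/functions.sh, whose @16-@23 lines are what xtrace keeps echoing. A condensed sketch of that pattern, reconstructed from the trace rather than copied from the source:

    # Sketch of the nvme_get pattern driving this trace (simplified
    # reconstruction, not the verbatim nvme/functions.sh source).
    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                       # global assoc array, e.g. nvme1=()
        while IFS=: read -r reg val; do           # split "reg : val" on the first ':'
            reg=${reg//[[:space:]]/}              # "lbaf  0" -> "lbaf0"
            [[ -n $reg && -n $val ]] || continue  # skip banner and blank lines
            eval "${ref}[$reg]=\"${val# }\""      # nvme1[mdts]="7", nvme1[sn]="12340 ", ...
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }

After nvme_get nvme1 id-ctrl /dev/nvme1 runs, ${nvme1[mdts]} and the other keys read the captured fields back, which is exactly the state the assignments below are building up.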
IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:01.023 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.024 
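The mdts=7 captured a few entries above bounds this controller's maximum data transfer size: per the NVMe spec, MDTS is a power of two in units of the minimum memory page size (CAP.MPSMIN). CAP itself is not in the trace, so assuming the usual 4 KiB page for this QEMU controller:

    # Hypothetical check: MDTS as bytes, assuming a 4 KiB CAP.MPSMIN page
    # (an assumption; the CAP register is not shown in this trace).
    mdts=7
    page=$((4 * 1024))
    echo "max transfer: $((page << mdts)) bytes"   # 4096 << 7 = 524288 (512 KiB)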
08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.024 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- 
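The wctemp=343 / cctemp=373 pair just captured is in kelvins, the unit id-ctrl uses for all temperature thresholds; converting makes the warning and critical composite-temperature thresholds readable:

    # id-ctrl temperature thresholds are reported in kelvins.
    wctemp=343 cctemp=373
    echo "warning:  $((wctemp - 273)) C"    # 70 C
    echo "critical: $((cctemp - 273)) C"    # 100 C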
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 
08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.025 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:01.026 08:31:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.026 08:31:40 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:01.026 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:01.027 08:31:40 nvme_fdp -- 
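With nsze=ncap=nuse=0x17a17a and flbas=0x7 now captured for nvme1n1, the namespace size in bytes follows from the in-use LBA format: the low nibble of FLBAS selects format 7, and lbaf7 (listed further down in the trace with its "(in use)" marker) carries lbads:12, i.e. 4096-byte blocks:

    # Namespace capacity from the nvme1n1 values captured above.
    nsze=$((0x17a17a))      # 1548666 blocks
    flbas=$((0x7))
    fmt=$((flbas & 0xf))    # low nibble selects the LBA format -> lbaf7
    lbads=12                # from "lbaf7 : ms:64 lbads:12 rp:0 (in use)"
    echo "$((nsze * (1 << lbads))) bytes"   # 1548666 * 4096 = 6343335936, ~5.9 GiB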
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:01.027 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:01.028 08:31:40 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:01.028 08:31:40 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:01.028 08:31:40 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:01.028 08:31:40 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:01.028 
08:31:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:01.028 08:31:40 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:01.028 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.029 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
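Every entry above follows the same four-step pattern from nvme/functions.sh (lines @16-@23 in the trace): run nvme-cli, split each output line on ':' with read, skip empty values, and eval the rest into a globally scoped associative array. A minimal sketch of that loop, reconstructed only from the logged commands (the real nvme/functions.sh may differ in details):

    # Sketch of the nvme_get pattern visible in this trace; reconstructed, not
    # the verbatim SPDK source. First arg names the target array, the rest is
    # the nvme-cli invocation (e.g. "id-ctrl /dev/nvme2").
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                    # matches "local -gA 'nvme2=()'" above
        while IFS=: read -r reg val; do        # split "vid : 0x1b36" on the first ':'
            reg=${reg//[[:space:]]/}           # field name, padding stripped
            [[ -n $val ]] || continue          # the "[[ -n '' ]]" entries skip blanks
            eval "${ref}[$reg]=\"${val# }\""   # keep trailing padding, as in sn='12342 '
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    # mirrors the logged call: nvme_get nvme2 id-ctrl /dev/nvme2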
00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:01.030 08:31:40 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.030 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
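Once captured, these fields are plain strings that later test code can decode. A hypothetical follow-on using two values logged above (nvme2[mdts]=7 and nvme2[oncs]=0x15d); the 4 KiB page size is an assumption, since CAP.MPSMIN is not part of this trace:

    mdts=${nvme2[mdts]} oncs=${nvme2[oncs]}                    # 7 and 0x15d above
    echo "max transfer: $(( (1 << mdts) * 4096 )) bytes"       # 2^7 * 4 KiB = 512 KiB
    (( oncs & 0x04 )) && echo "Dataset Management supported"   # bit 2 of 0x15d is set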
00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:13:01.031 08:31:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.031 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
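The id-ns fields just captured are enough to size the namespace: flbas bits 0-3 select the in-use LBA format, and the lbaf entries the loop reaches further below carry the data size as a power of two. A hypothetical worked example against the nvme2n1 array, assuming it has been fully populated:

    fmt=$(( ${nvme2n1[flbas]} & 0xf ))                  # 0x4 -> lbaf4 is in use
    [[ ${nvme2n1[lbaf$fmt]} =~ lbads:([0-9]+) ]] &&
        lbads=${BASH_REMATCH[1]}                        # "lbads:12" -> 4096-byte blocks
    echo "$(( ${nvme2n1[nsze]} * (1 << lbads) )) bytes" # 0x100000 * 4096 = 4 GiB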
00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.032 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
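Before any of this per-controller parsing starts, each device is gated by the pci_can_use check logged earlier in the trace (scripts/common.sh@18-@27, where 0000:00:12.0 passed because the filter list was empty). A loose reconstruction of that gate; the PCI_ALLOWED variable name is an assumption, as the trace only shows the tests themselves:

    # Sketch, assuming an allow-list variable; not the verbatim SPDK helper.
    pci_can_use() {
        local i                                # mirrors "local i" at common.sh@18
        [[ $PCI_ALLOWED =~ $1 ]] && return 0   # on the allow list (common.sh@21)
        [[ -z $PCI_ALLOWED ]] || return 1      # list set but device not on it
        return 0                               # no list set: take it (common.sh@27)
    }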
00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.294 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.295 08:31:40 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:01.295 08:31:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.295 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]]
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "'
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]]
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "'
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]]
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "'
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "'
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]]
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"'
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "'
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "'
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "'
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.296 08:31:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
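With nvme2n2 registered in _ctrl_ns, its geometry can be read straight off the fields just parsed: flbas=0x4 selects LBA format 4 in its low four bits, and lbaf4 above is 'ms:0 lbads:12 rp:0 (in use)', i.e. no interleaved metadata and 2^12 = 4096-byte blocks. A hypothetical helper (nvme_block_size is not part of functions.sh) that derives the in-use block size from one of these arrays:

    nvme_block_size() {
        local -n ns=$1                       # nameref onto nvme2n1, nvme2n2, ...
        local idx=$((ns[flbas] & 0xf))       # in-use LBA format index (4 here)
        local lbads=${ns[lbaf$idx]#*lbads:}  # 'ms:0 lbads:12 rp:0 (in use)' -> '12 rp:0 ...'
        echo $((1 << ${lbads%% *}))          # 2^lbads bytes per LBA
    }

nvme_block_size nvme2n2 would print 4096 for every namespace in this trace.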
ns in "$ctrl/${ctrl##*/}n"* 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.297 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:01.298 
08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:01.298 08:31:40 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=:
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]]
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "'
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
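The @60-@63 assignments close out controller nvme2: each controller ends up in a set of global maps (ctrls, nvmes, bdfs, ordered_ctrls) plus one <ctrl>_ns map keyed by namespace number, after which the loop advances to nvme3 at 0000:00:13.0. The shape of that loop, reconstructed from the @47-@63 trace; the sysfs-to-BDF step and the map declarations are assumptions:

    declare -A ctrls nvmes bdfs                           # assumed global state
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                        # functions.sh@48
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumption: sysfs link -> BDF
        pci_can_use "$pci" || continue                    # scripts/common.sh filter
        ctrl_dev=${ctrl##*/}                              # nvme2, nvme3, ...
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        declare -gA "${ctrl_dev}_ns=()"
        unset -n _ctrl_ns; declare -n _ctrl_ns=${ctrl_dev}_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do               # nvme2n1, nvme2n2, nvme2n3
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev                   # keyed by namespace number
        done
        ctrls[$ctrl_dev]=$ctrl_dev                        # @60
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                   # @61
        bdfs[$ctrl_dev]=$pci                              # @62, 0000:00:12.0 above
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # @63
    done

Because ordered_ctrls is indexed by the controller number, later stages can walk the controllers in a stable order.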
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:13:01.298 08:31:40 nvme_fdp -- scripts/common.sh@18 -- # local i
00:13:01.298 08:31:40 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:13:01.298 08:31:40 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:13:01.298 08:31:40 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"'
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"'
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]]
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "'
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 '
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]]
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "'
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl '
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.298 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]]
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "'
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 '
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]]
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"'
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]]
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"'
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]]
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"'
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"'
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"'
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]]
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"'
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"'
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:13:01.299 08:31:40
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.299 
08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:01.299 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 
08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.300 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.301 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.302 08:31:40 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:01.302 08:31:40 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:13:01.302 08:31:40 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:13:01.302 08:31:40 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:13:01.302 08:31:40 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:13:01.302 08:31:40 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:01.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:02.435 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:02.435 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:02.435 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:02.435 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:02.435 08:31:41 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:02.435 08:31:41 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:02.435 08:31:41 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.435 08:31:41 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:02.435 ************************************ 00:13:02.435 START TEST nvme_flexible_data_placement 00:13:02.435 ************************************ 00:13:02.435 08:31:41 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:02.694 Initializing NVMe Controllers 00:13:02.694 Attaching to 0000:00:13.0 00:13:02.694 Controller supports FDP Attached to 0000:00:13.0 00:13:02.694 Namespace ID: 1 Endurance Group ID: 1 00:13:02.694 Initialization complete. 00:13:02.694 00:13:02.694 ================================== 00:13:02.694 == FDP tests for Namespace: #01 == 00:13:02.694 ================================== 00:13:02.694 00:13:02.694 Get Feature: FDP: 00:13:02.694 ================= 00:13:02.694 Enabled: Yes 00:13:02.694 FDP configuration Index: 0 00:13:02.694 00:13:02.694 FDP configurations log page 00:13:02.694 =========================== 00:13:02.694 Number of FDP configurations: 1 00:13:02.694 Version: 0 00:13:02.694 Size: 112 00:13:02.694 FDP Configuration Descriptor: 0 00:13:02.694 Descriptor Size: 96 00:13:02.694 Reclaim Group Identifier format: 2 00:13:02.694 FDP Volatile Write Cache: Not Present 00:13:02.694 FDP Configuration: Valid 00:13:02.694 Vendor Specific Size: 0 00:13:02.694 Number of Reclaim Groups: 2 00:13:02.694 Number of Reclaim Unit Handles: 8 00:13:02.694 Max Placement Identifiers: 128 00:13:02.694 Number of Namespaces Supported: 256 00:13:02.694 Reclaim unit Nominal Size: 6000000 bytes 00:13:02.694 Estimated Reclaim Unit Time Limit: Not Reported 00:13:02.694 RUH Desc #000: RUH Type: Initially Isolated 00:13:02.694 RUH Desc #001: RUH Type: Initially Isolated 00:13:02.694 RUH Desc #002: RUH Type: Initially Isolated 00:13:02.694 RUH Desc #003: RUH Type: Initially Isolated 00:13:02.694 RUH Desc #004: RUH Type: Initially Isolated 00:13:02.694 RUH Desc #005: RUH Type: Initially Isolated 00:13:02.694 RUH Desc #006: RUH Type: Initially Isolated 00:13:02.694 RUH Desc #007: RUH Type: Initially Isolated 00:13:02.694 00:13:02.694 FDP reclaim unit handle usage log page 00:13:02.694 ====================================== 00:13:02.694 Number of Reclaim Unit Handles: 8 00:13:02.694 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:02.694 RUH Usage Desc #001: RUH Attributes: Unused 00:13:02.694 RUH Usage Desc #002: RUH Attributes: Unused 00:13:02.694 RUH Usage Desc #003: RUH Attributes: Unused 00:13:02.694 RUH Usage Desc #004: RUH Attributes: Unused 00:13:02.694 RUH Usage Desc #005: RUH Attributes: Unused 00:13:02.694 RUH Usage Desc #006: RUH Attributes: Unused 00:13:02.694 RUH Usage Desc #007: RUH Attributes: Unused 00:13:02.694 00:13:02.694 FDP statistics log page 00:13:02.694 ======================= 00:13:02.694 Host bytes with metadata written: 791887872 00:13:02.694 Media bytes with metadata written: 792051712 00:13:02.694 Media bytes erased: 0 00:13:02.694 00:13:02.694 FDP Reclaim unit handle status 00:13:02.694 ============================== 00:13:02.694 Number of RUHS descriptors: 2 00:13:02.694 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000ccc 00:13:02.694 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:13:02.694 00:13:02.694 FDP write on placement id: 0 success 00:13:02.694 00:13:02.694 Set Feature: Enabling FDP events on Placement handle: #0
Success 00:13:02.694 00:13:02.694 IO mgmt send: RUH update for Placement ID: #0 Success 00:13:02.694 00:13:02.695 Get Feature: FDP Events for Placement handle: #0 00:13:02.695 ======================== 00:13:02.695 Number of FDP Events: 6 00:13:02.695 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:13:02.695 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:13:02.695 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:13:02.695 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:13:02.695 FDP Event: #4 Type: Media Reallocated Enabled: No 00:13:02.695 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:13:02.695 00:13:02.695 FDP events log page 00:13:02.695 =================== 00:13:02.695 Number of FDP events: 1 00:13:02.695 FDP Event #0: 00:13:02.695 Event Type: RU Not Written to Capacity 00:13:02.695 Placement Identifier: Valid 00:13:02.695 NSID: Valid 00:13:02.695 Location: Valid 00:13:02.695 Placement Identifier: 0 00:13:02.695 Event Timestamp: 8 00:13:02.695 Namespace Identifier: 1 00:13:02.695 Reclaim Group Identifier: 0 00:13:02.695 Reclaim Unit Handle Identifier: 0 00:13:02.695 00:13:02.695 FDP test passed 00:13:02.695 00:13:02.695 real 0m0.303s 00:13:02.695 user 0m0.115s 00:13:02.695 sys 0m0.086s 00:13:02.695 08:31:41 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.695 08:31:41 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:13:02.695 ************************************ 00:13:02.695 END TEST nvme_flexible_data_placement 00:13:02.695 ************************************ 00:13:02.695 00:13:02.695 real 0m8.009s 00:13:02.695 user 0m1.348s 00:13:02.695 sys 0m1.657s 00:13:02.695 08:31:41 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.695 08:31:41 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:02.695 ************************************ 00:13:02.695 END TEST nvme_fdp 00:13:02.695 ************************************ 00:13:02.954 08:31:42 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:13:02.954 08:31:42 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:02.954 08:31:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:02.954 08:31:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.954 08:31:42 -- common/autotest_common.sh@10 -- # set +x 00:13:02.954 ************************************ 00:13:02.954 START TEST nvme_rpc 00:13:02.954 ************************************ 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:02.954 * Looking for test storage... 
00:13:02.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:02.954 08:31:42 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:02.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.954 --rc genhtml_branch_coverage=1 00:13:02.954 --rc genhtml_function_coverage=1 00:13:02.954 --rc genhtml_legend=1 00:13:02.954 --rc geninfo_all_blocks=1 00:13:02.954 --rc geninfo_unexecuted_blocks=1 00:13:02.954 00:13:02.954 ' 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:02.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.954 --rc genhtml_branch_coverage=1 00:13:02.954 --rc genhtml_function_coverage=1 00:13:02.954 --rc genhtml_legend=1 00:13:02.954 --rc geninfo_all_blocks=1 00:13:02.954 --rc geninfo_unexecuted_blocks=1 00:13:02.954 00:13:02.954 ' 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:02.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.954 --rc genhtml_branch_coverage=1 00:13:02.954 --rc genhtml_function_coverage=1 00:13:02.954 --rc genhtml_legend=1 00:13:02.954 --rc geninfo_all_blocks=1 00:13:02.954 --rc geninfo_unexecuted_blocks=1 00:13:02.954 00:13:02.954 ' 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:02.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.954 --rc genhtml_branch_coverage=1 00:13:02.954 --rc genhtml_function_coverage=1 00:13:02.954 --rc genhtml_legend=1 00:13:02.954 --rc geninfo_all_blocks=1 00:13:02.954 --rc geninfo_unexecuted_blocks=1 00:13:02.954 00:13:02.954 ' 00:13:02.954 08:31:42 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:02.954 08:31:42 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:02.954 08:31:42 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:03.213 08:31:42 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:03.213 08:31:42 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:03.213 08:31:42 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:13:03.213 08:31:42 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:13:03.213 08:31:42 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67450 00:13:03.213 08:31:42 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:03.213 08:31:42 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:03.213 08:31:42 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67450 00:13:03.213 08:31:42 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67450 ']' 00:13:03.213 08:31:42 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.213 08:31:42 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.213 08:31:42 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.213 08:31:42 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.213 08:31:42 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.213 [2024-11-19 08:31:42.409022] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:13:03.213 [2024-11-19 08:31:42.409195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67450 ] 00:13:03.471 [2024-11-19 08:31:42.605238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:03.471 [2024-11-19 08:31:42.737585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.471 [2024-11-19 08:31:42.737585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.407 08:31:43 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.407 08:31:43 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:04.407 08:31:43 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:13:04.665 Nvme0n1 00:13:04.665 08:31:43 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:04.665 08:31:43 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:04.923 request: 00:13:04.923 { 00:13:04.924 "bdev_name": "Nvme0n1", 00:13:04.924 "filename": "non_existing_file", 00:13:04.924 "method": "bdev_nvme_apply_firmware", 00:13:04.924 "req_id": 1 00:13:04.924 } 00:13:04.924 Got JSON-RPC error response 00:13:04.924 response: 00:13:04.924 { 00:13:04.924 "code": -32603, 00:13:04.924 "message": "open file failed." 00:13:04.924 } 00:13:04.924 08:31:44 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:04.924 08:31:44 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:04.924 08:31:44 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:05.184 08:31:44 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:05.184 08:31:44 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67450 00:13:05.184 08:31:44 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67450 ']' 00:13:05.184 08:31:44 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67450 00:13:05.184 08:31:44 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:05.184 08:31:44 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.184 08:31:44 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67450 00:13:05.442 08:31:44 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:05.442 killing process with pid 67450 00:13:05.442 08:31:44 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:05.442 08:31:44 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67450' 00:13:05.442 08:31:44 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67450 00:13:05.442 08:31:44 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67450 00:13:07.338 00:13:07.338 real 0m4.433s 00:13:07.338 user 0m8.712s 00:13:07.338 sys 0m0.626s 00:13:07.338 08:31:46 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.338 08:31:46 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.338 ************************************ 00:13:07.338 END TEST nvme_rpc 00:13:07.338 ************************************ 00:13:07.338 08:31:46 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:07.338 08:31:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:13:07.338 08:31:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.338 08:31:46 -- common/autotest_common.sh@10 -- # set +x 00:13:07.338 ************************************ 00:13:07.338 START TEST nvme_rpc_timeouts 00:13:07.338 ************************************ 00:13:07.338 08:31:46 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:07.338 * Looking for test storage... 00:13:07.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:07.338 08:31:46 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:07.338 08:31:46 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:07.338 08:31:46 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:13:07.596 08:31:46 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:07.596 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.596 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.596 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.596 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:13:07.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.597 08:31:46 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:13:07.597 08:31:46 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.597 08:31:46 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:07.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.597 --rc genhtml_branch_coverage=1 00:13:07.597 --rc genhtml_function_coverage=1 00:13:07.597 --rc genhtml_legend=1 00:13:07.597 --rc geninfo_all_blocks=1 00:13:07.597 --rc geninfo_unexecuted_blocks=1 00:13:07.597 00:13:07.597 ' 00:13:07.597 08:31:46 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:07.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.597 --rc genhtml_branch_coverage=1 00:13:07.597 --rc genhtml_function_coverage=1 00:13:07.597 --rc genhtml_legend=1 00:13:07.597 --rc geninfo_all_blocks=1 00:13:07.597 --rc geninfo_unexecuted_blocks=1 00:13:07.597 00:13:07.597 ' 00:13:07.597 08:31:46 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:07.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.597 --rc genhtml_branch_coverage=1 00:13:07.597 --rc genhtml_function_coverage=1 00:13:07.597 --rc genhtml_legend=1 00:13:07.597 --rc geninfo_all_blocks=1 00:13:07.597 --rc geninfo_unexecuted_blocks=1 00:13:07.597 00:13:07.597 ' 00:13:07.597 08:31:46 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:07.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.597 --rc genhtml_branch_coverage=1 00:13:07.597 --rc genhtml_function_coverage=1 00:13:07.597 --rc genhtml_legend=1 00:13:07.597 --rc geninfo_all_blocks=1 00:13:07.597 --rc geninfo_unexecuted_blocks=1 00:13:07.597 00:13:07.597 ' 00:13:07.597 08:31:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:07.597 08:31:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67528 00:13:07.597 08:31:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67528 00:13:07.597 08:31:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67560 00:13:07.597 08:31:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:07.597 08:31:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:13:07.597 08:31:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67560 00:13:07.597 08:31:46 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67560 ']' 00:13:07.597 08:31:46 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.597 08:31:46 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.597 08:31:46 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:07.597 08:31:46 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.597 08:31:46 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:07.597 [2024-11-19 08:31:46.823073] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:13:07.597 [2024-11-19 08:31:46.823447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67560 ] 00:13:07.855 [2024-11-19 08:31:47.005560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:07.855 [2024-11-19 08:31:47.112394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.855 [2024-11-19 08:31:47.112398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.789 Checking default timeout settings: 00:13:08.789 08:31:47 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.789 08:31:47 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:13:08.789 08:31:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:08.789 08:31:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:09.357 Making settings changes with rpc: 00:13:09.357 08:31:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:09.357 08:31:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:09.615 08:31:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:13:09.615 Check default vs. modified settings: 00:13:09.615 08:31:48 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67528 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67528 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:10.180 Setting action_on_timeout is changed as expected. 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67528 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67528 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:10.180 Setting timeout_us is changed as expected. 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67528 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67528 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:10.180 Setting timeout_admin_us is changed as expected. 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
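Condensed for reference, the verification loop traced above amounts to the following sketch: snapshot the target's configuration with save_config, apply the new timeouts with bdev_nvme_set_options, snapshot again, then extract and compare each setting's value before and after. The rpc.py path and option values are the ones used in this run; the snapshot file names here are illustrative.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default                   # snapshot the defaults
$rpc bdev_nvme_set_options --timeout-us=12000000 \
    --timeout-admin-us=24000000 --action-on-timeout=abort  # apply the new timeouts
$rpc save_config > /tmp/settings_modified                  # snapshot the result
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
done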
00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67528 /tmp/settings_modified_67528 00:13:10.180 08:31:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67560 00:13:10.180 08:31:49 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67560 ']' 00:13:10.180 08:31:49 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67560 00:13:10.180 08:31:49 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:13:10.180 08:31:49 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.180 08:31:49 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67560 00:13:10.180 killing process with pid 67560 00:13:10.180 08:31:49 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:10.180 08:31:49 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:10.180 08:31:49 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67560' 00:13:10.180 08:31:49 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67560 00:13:10.180 08:31:49 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67560 00:13:12.081 RPC TIMEOUT SETTING TEST PASSED. 00:13:12.081 08:31:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:13:12.081 00:13:12.081 real 0m4.866s 00:13:12.081 user 0m9.843s 00:13:12.081 sys 0m0.611s 00:13:12.081 ************************************ 00:13:12.081 END TEST nvme_rpc_timeouts 00:13:12.081 ************************************ 00:13:12.081 08:31:51 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.081 08:31:51 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:12.339 08:31:51 -- spdk/autotest.sh@239 -- # uname -s 00:13:12.339 08:31:51 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:13:12.339 08:31:51 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:12.339 08:31:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:12.339 08:31:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.339 08:31:51 -- common/autotest_common.sh@10 -- # set +x 00:13:12.339 ************************************ 00:13:12.339 START TEST sw_hotplug 00:13:12.339 ************************************ 00:13:12.339 08:31:51 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:12.339 * Looking for test storage... 
00:13:12.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:12.339 08:31:51 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:12.339 08:31:51 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:13:12.339 08:31:51 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:12.339 08:31:51 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:12.339 08:31:51 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:13:12.339 08:31:51 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:12.339 08:31:51 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:12.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.339 --rc genhtml_branch_coverage=1 00:13:12.339 --rc genhtml_function_coverage=1 00:13:12.339 --rc genhtml_legend=1 00:13:12.339 --rc geninfo_all_blocks=1 00:13:12.339 --rc geninfo_unexecuted_blocks=1 00:13:12.339 00:13:12.339 ' 00:13:12.339 08:31:51 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:12.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.339 --rc genhtml_branch_coverage=1 00:13:12.339 --rc genhtml_function_coverage=1 00:13:12.339 --rc genhtml_legend=1 00:13:12.339 --rc geninfo_all_blocks=1 00:13:12.339 --rc geninfo_unexecuted_blocks=1 00:13:12.339 00:13:12.339 ' 00:13:12.339 08:31:51 
sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:12.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.339 --rc genhtml_branch_coverage=1 00:13:12.339 --rc genhtml_function_coverage=1 00:13:12.339 --rc genhtml_legend=1 00:13:12.339 --rc geninfo_all_blocks=1 00:13:12.339 --rc geninfo_unexecuted_blocks=1 00:13:12.339 00:13:12.339 ' 00:13:12.339 08:31:51 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:12.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.339 --rc genhtml_branch_coverage=1 00:13:12.339 --rc genhtml_function_coverage=1 00:13:12.339 --rc genhtml_legend=1 00:13:12.339 --rc geninfo_all_blocks=1 00:13:12.339 --rc geninfo_unexecuted_blocks=1 00:13:12.339 00:13:12.339 ' 00:13:12.339 08:31:51 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:12.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:12.942 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:12.942 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:12.942 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:12.942 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:12.942 08:31:52 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:13:12.942 08:31:52 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:13:12.942 08:31:52 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:13:12.942 08:31:52 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@233 -- # local class 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:12.943 
08:31:52 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:13:12.943 08:31:52 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:12.943 08:31:52 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:13:12.943 08:31:52 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:13:12.943 08:31:52 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:13.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:13.521 Waiting for block devices as requested 00:13:13.521 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:13.521 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:13.779 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:13.779 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:19.046 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:19.046 08:31:58 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:13:19.046 08:31:58 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:19.304 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:13:19.304 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:19.304 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:13:19.619 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:13:19.895 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:19.895 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:19.895 08:31:59 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:13:19.895 08:31:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:20.152 08:31:59 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:13:20.152 08:31:59 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:13:20.152 08:31:59 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68438 00:13:20.152 08:31:59 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:13:20.152 08:31:59 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:13:20.152 08:31:59 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:20.152 08:31:59 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:13:20.152 08:31:59 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:20.152 08:31:59 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:20.152 08:31:59 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:20.152 08:31:59 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:20.152 08:31:59 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:13:20.152 08:31:59 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:20.152 08:31:59 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:13:20.152 08:31:59 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:13:20.152 08:31:59 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:20.152 08:31:59 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:20.410 Initializing NVMe Controllers 00:13:20.410 Attaching to 0000:00:10.0 00:13:20.410 Attaching to 0000:00:11.0 00:13:20.410 Attached to 0000:00:10.0 00:13:20.410 Attached to 0000:00:11.0 00:13:20.410 Initialization complete. Starting I/O... 00:13:20.410 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:13:20.410 QEMU NVMe Ctrl (12341 ): 2 I/Os completed (+2) 00:13:20.410 00:13:21.345 QEMU NVMe Ctrl (12340 ): 1757 I/Os completed (+1757) 00:13:21.345 QEMU NVMe Ctrl (12341 ): 1914 I/Os completed (+1912) 00:13:21.345 00:13:22.280 QEMU NVMe Ctrl (12340 ): 3123 I/Os completed (+1366) 00:13:22.280 QEMU NVMe Ctrl (12341 ): 3503 I/Os completed (+1589) 00:13:22.280 00:13:23.214 QEMU NVMe Ctrl (12340 ): 5947 I/Os completed (+2824) 00:13:23.214 QEMU NVMe Ctrl (12341 ): 6746 I/Os completed (+3243) 00:13:23.214 00:13:24.586 QEMU NVMe Ctrl (12340 ): 7887 I/Os completed (+1940) 00:13:24.587 QEMU NVMe Ctrl (12341 ): 9464 I/Os completed (+2718) 00:13:24.587 00:13:25.522 QEMU NVMe Ctrl (12340 ): 9854 I/Os completed (+1967) 00:13:25.522 QEMU NVMe Ctrl (12341 ): 12046 I/Os completed (+2582) 00:13:25.522 00:13:26.099 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:26.099 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:26.099 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:26.099 [2024-11-19 08:32:05.232123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:26.099 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:26.100 [2024-11-19 08:32:05.234788] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.100 [2024-11-19 08:32:05.234873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.100 [2024-11-19 08:32:05.234909] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.100 [2024-11-19 08:32:05.234942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.100 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:26.100 [2024-11-19 08:32:05.238383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.100 [2024-11-19 08:32:05.238463] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.100 [2024-11-19 08:32:05.238493] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.100 [2024-11-19 08:32:05.238521] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.100 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:26.100 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:26.100 [2024-11-19 08:32:05.261534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:26.100 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:26.100 [2024-11-19 08:32:05.263810] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.100 [2024-11-19 08:32:05.263881] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.100 [2024-11-19 08:32:05.263923] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.100 [2024-11-19 08:32:05.263952] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.100 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:26.100 [2024-11-19 08:32:05.266982] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.100 [2024-11-19 08:32:05.267047] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.100 [2024-11-19 08:32:05.267082] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.100 [2024-11-19 08:32:05.267106] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:26.100 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:26.100 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:26.100 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:26.100 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:26.100 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:26.359 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:26.359 00:13:26.359 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:26.359 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:26.359 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:26.359 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:26.359 Attaching to 0000:00:10.0 00:13:26.359 Attached to 0000:00:10.0 00:13:26.359 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:26.359 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:26.359 08:32:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:26.359 Attaching to 0000:00:11.0 00:13:26.359 Attached to 0000:00:11.0 00:13:27.295 QEMU NVMe Ctrl (12340 ): 1905 I/Os completed (+1905) 00:13:27.295 QEMU NVMe Ctrl (12341 ): 2551 I/Os completed (+2551) 00:13:27.295 00:13:28.228 QEMU NVMe Ctrl (12340 ): 3394 I/Os completed (+1489) 00:13:28.228 QEMU NVMe Ctrl (12341 ): 4518 I/Os completed (+1967) 00:13:28.228 00:13:29.601 QEMU NVMe Ctrl (12340 ): 5084 I/Os completed (+1690) 00:13:29.601 QEMU NVMe Ctrl (12341 ): 6437 I/Os completed (+1919) 00:13:29.601 00:13:30.534 QEMU NVMe Ctrl (12340 ): 7050 I/Os completed (+1966) 00:13:30.534 QEMU NVMe Ctrl (12341 ): 9028 I/Os completed (+2591) 00:13:30.534 00:13:31.469 QEMU NVMe Ctrl (12340 ): 8825 I/Os completed (+1775) 00:13:31.469 QEMU NVMe Ctrl (12341 ): 10972 I/Os completed (+1944) 00:13:31.469 00:13:32.402 QEMU NVMe Ctrl (12340 ): 10674 I/Os completed (+1849) 00:13:32.402 QEMU NVMe Ctrl (12341 ): 12902 I/Os completed (+1930) 00:13:32.402 00:13:33.335 QEMU NVMe Ctrl (12340 ): 12280 I/Os completed (+1606) 00:13:33.335 QEMU NVMe Ctrl (12341 ): 14766 I/Os completed (+1864) 00:13:33.335 00:13:34.269 QEMU NVMe Ctrl (12340 ): 14688 I/Os completed (+2408) 00:13:34.269 QEMU NVMe Ctrl (12341 ): 17503 I/Os completed (+2737) 00:13:34.269 
00:13:35.203 QEMU NVMe Ctrl (12340 ): 16745 I/Os completed (+2057) 00:13:35.203 QEMU NVMe Ctrl (12341 ): 19833 I/Os completed (+2330) 00:13:35.203 00:13:36.577 QEMU NVMe Ctrl (12340 ): 19571 I/Os completed (+2826) 00:13:36.577 QEMU NVMe Ctrl (12341 ): 23081 I/Os completed (+3248) 00:13:36.577 00:13:37.510 QEMU NVMe Ctrl (12340 ): 23093 I/Os completed (+3522) 00:13:37.510 QEMU NVMe Ctrl (12341 ): 27162 I/Os completed (+4081) 00:13:37.510 00:13:38.442 QEMU NVMe Ctrl (12340 ): 27662 I/Os completed (+4569) 00:13:38.442 QEMU NVMe Ctrl (12341 ): 32359 I/Os completed (+5197) 00:13:38.442 00:13:38.442 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:38.442 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:38.442 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:38.442 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:38.442 [2024-11-19 08:32:17.584575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:38.442 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:38.442 [2024-11-19 08:32:17.587516] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.442 [2024-11-19 08:32:17.587627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.442 [2024-11-19 08:32:17.587677] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.442 [2024-11-19 08:32:17.587722] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.442 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:38.442 [2024-11-19 08:32:17.594488] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.442 [2024-11-19 08:32:17.594575] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.442 [2024-11-19 08:32:17.594645] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.442 [2024-11-19 08:32:17.594691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.442 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:38.442 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:38.442 [2024-11-19 08:32:17.625925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:38.442 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:38.442 [2024-11-19 08:32:17.628636] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.442 [2024-11-19 08:32:17.628718] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.442 [2024-11-19 08:32:17.628774] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.442 [2024-11-19 08:32:17.628813] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.442 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:38.442 [2024-11-19 08:32:17.632577] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.442 [2024-11-19 08:32:17.632664] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.442 [2024-11-19 08:32:17.632708] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.442 [2024-11-19 08:32:17.632746] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.442 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:38.442 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:38.442 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:38.442 EAL: Scan for (pci) bus failed. 00:13:38.700 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:38.700 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:38.700 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:38.700 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:38.700 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:38.700 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:38.700 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:38.700 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:38.700 Attaching to 0000:00:10.0 00:13:38.700 Attached to 0000:00:10.0 00:13:38.700 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:38.700 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:38.700 08:32:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:38.700 Attaching to 0000:00:11.0 00:13:38.700 Attached to 0000:00:11.0 00:13:39.266 QEMU NVMe Ctrl (12340 ): 2172 I/Os completed (+2172) 00:13:39.266 QEMU NVMe Ctrl (12341 ): 2581 I/Os completed (+2581) 00:13:39.266 00:13:40.199 QEMU NVMe Ctrl (12340 ): 3890 I/Os completed (+1718) 00:13:40.199 QEMU NVMe Ctrl (12341 ): 4617 I/Os completed (+2036) 00:13:40.199 00:13:41.571 QEMU NVMe Ctrl (12340 ): 6742 I/Os completed (+2852) 00:13:41.571 QEMU NVMe Ctrl (12341 ): 8104 I/Os completed (+3487) 00:13:41.571 00:13:42.503 QEMU NVMe Ctrl (12340 ): 9726 I/Os completed (+2984) 00:13:42.503 QEMU NVMe Ctrl (12341 ): 11686 I/Os completed (+3582) 00:13:42.503 00:13:43.436 QEMU NVMe Ctrl (12340 ): 11160 I/Os completed (+1434) 00:13:43.436 QEMU NVMe Ctrl (12341 ): 13358 I/Os completed (+1672) 00:13:43.436 00:13:44.370 QEMU NVMe Ctrl (12340 ): 13319 I/Os completed (+2159) 00:13:44.370 QEMU NVMe Ctrl (12341 ): 16146 I/Os completed (+2788) 00:13:44.370 00:13:45.304 QEMU NVMe Ctrl (12340 ): 16804 I/Os completed (+3485) 00:13:45.304 QEMU NVMe Ctrl (12341 ): 20049 I/Os completed (+3903) 00:13:45.304 
00:13:46.237 QEMU NVMe Ctrl (12340 ): 18678 I/Os completed (+1874) 00:13:46.237 QEMU NVMe Ctrl (12341 ): 22232 I/Os completed (+2183) 00:13:46.237 00:13:47.611 QEMU NVMe Ctrl (12340 ): 20584 I/Os completed (+1906) 00:13:47.611 QEMU NVMe Ctrl (12341 ): 24445 I/Os completed (+2213) 00:13:47.611 00:13:48.177 QEMU NVMe Ctrl (12340 ): 23022 I/Os completed (+2438) 00:13:48.177 QEMU NVMe Ctrl (12341 ): 27572 I/Os completed (+3127) 00:13:48.177 00:13:49.550 QEMU NVMe Ctrl (12340 ): 24798 I/Os completed (+1776) 00:13:49.550 QEMU NVMe Ctrl (12341 ): 29940 I/Os completed (+2368) 00:13:49.550 00:13:50.483 QEMU NVMe Ctrl (12340 ): 27450 I/Os completed (+2652) 00:13:50.483 QEMU NVMe Ctrl (12341 ): 33357 I/Os completed (+3417) 00:13:50.483 00:13:50.742 08:32:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:50.742 08:32:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:50.742 08:32:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:50.742 08:32:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:50.742 [2024-11-19 08:32:29.959123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:50.742 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:50.742 [2024-11-19 08:32:29.962782] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.742 [2024-11-19 08:32:29.962872] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.742 [2024-11-19 08:32:29.962922] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.742 [2024-11-19 08:32:29.962969] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.742 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:50.742 [2024-11-19 08:32:29.967929] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.742 [2024-11-19 08:32:29.968018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.742 [2024-11-19 08:32:29.968068] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.742 [2024-11-19 08:32:29.968110] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.742 08:32:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:50.742 08:32:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:50.742 [2024-11-19 08:32:29.994442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:50.742 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:50.742 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/class 00:13:50.742 EAL: Scan for (pci) bus failed. 
00:13:50.742 [2024-11-19 08:32:29.996088] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.742 [2024-11-19 08:32:29.996140] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.742 [2024-11-19 08:32:29.996177] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.742 [2024-11-19 08:32:29.996201] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.742 08:32:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:50.742 08:32:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:50.742 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:50.742 [2024-11-19 08:32:29.998697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.742 [2024-11-19 08:32:29.998748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.742 [2024-11-19 08:32:29.998778] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.742 [2024-11-19 08:32:29.998797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.000 08:32:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:51.000 08:32:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:51.000 08:32:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:51.000 08:32:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:51.000 08:32:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:51.000 08:32:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:51.000 08:32:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:51.000 08:32:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:51.000 Attaching to 0000:00:10.0 00:13:51.000 Attached to 0000:00:10.0 00:13:51.000 08:32:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:51.258 08:32:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:51.258 08:32:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:51.258 Attaching to 0000:00:11.0 00:13:51.258 Attached to 0000:00:11.0 00:13:51.258 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:51.258 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:51.258 [2024-11-19 08:32:30.314835] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:14:03.456 08:32:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:03.456 08:32:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:03.456 08:32:42 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.07 00:14:03.456 08:32:42 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.07 00:14:03.456 08:32:42 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:03.456 08:32:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.07 00:14:03.456 08:32:42 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.07 2 00:14:03.456 remove_attach_helper took 43.07s to complete (handling 2 nvme drive(s)) 08:32:42 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:14:10.022 08:32:48 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68438 00:14:10.022 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68438) - No such process 
00:14:10.022 08:32:48 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68438 00:14:10.022 08:32:48 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:14:10.022 08:32:48 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:14:10.022 08:32:48 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:14:10.022 08:32:48 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68981 00:14:10.022 08:32:48 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:10.022 08:32:48 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:14:10.022 08:32:48 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68981 00:14:10.022 08:32:48 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68981 ']' 00:14:10.022 08:32:48 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.022 08:32:48 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.022 08:32:48 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.022 08:32:48 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.022 08:32:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:10.022 [2024-11-19 08:32:48.416273] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:14:10.022 [2024-11-19 08:32:48.416439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68981 ] 00:14:10.022 [2024-11-19 08:32:48.590584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.022 [2024-11-19 08:32:48.693661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.280 08:32:49 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.280 08:32:49 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:14:10.280 08:32:49 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:10.280 08:32:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.280 08:32:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:10.280 08:32:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.280 08:32:49 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:14:10.280 08:32:49 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:10.280 08:32:49 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:10.280 08:32:49 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:10.280 08:32:49 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:10.280 08:32:49 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:10.280 08:32:49 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:10.280 08:32:49 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:14:10.280 08:32:49 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:10.280 08:32:49 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:10.280 08:32:49 sw_hotplug -- 
nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:10.280 08:32:49 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:10.280 08:32:49 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:16.842 08:32:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:16.842 08:32:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:16.842 08:32:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:16.842 08:32:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:16.842 08:32:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:16.842 08:32:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:16.842 08:32:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:16.842 08:32:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:16.842 08:32:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:16.842 08:32:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:16.842 08:32:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:16.842 08:32:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.842 08:32:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:16.842 [2024-11-19 08:32:55.548041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:16.842 [2024-11-19 08:32:55.550812] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.842 [2024-11-19 08:32:55.550865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.842 [2024-11-19 08:32:55.550891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.842 [2024-11-19 08:32:55.550938] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.842 [2024-11-19 08:32:55.550962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.842 [2024-11-19 08:32:55.550980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.842 [2024-11-19 08:32:55.550996] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.842 [2024-11-19 08:32:55.551012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.842 [2024-11-19 08:32:55.551026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.842 [2024-11-19 08:32:55.551048] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.842 [2024-11-19 08:32:55.551062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.842 [2024-11-19 08:32:55.551078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.842 08:32:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.842 08:32:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:16.842 08:32:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # 
sleep 0.5 00:14:16.842 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:16.842 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:16.842 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:16.842 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:16.842 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:16.842 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:16.842 08:32:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.842 08:32:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:16.842 08:32:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.101 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:17.101 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:17.102 [2024-11-19 08:32:56.248050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:17.102 [2024-11-19 08:32:56.250897] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.102 [2024-11-19 08:32:56.250949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.102 [2024-11-19 08:32:56.250990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.102 [2024-11-19 08:32:56.251017] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.102 [2024-11-19 08:32:56.251035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.102 [2024-11-19 08:32:56.251049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.102 [2024-11-19 08:32:56.251066] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.102 [2024-11-19 08:32:56.251082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.102 [2024-11-19 08:32:56.251110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.102 [2024-11-19 08:32:56.251136] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.102 [2024-11-19 08:32:56.251155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.102 [2024-11-19 08:32:56.251169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.361 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:17.361 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:17.361 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:17.361 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:17.361 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:17.361 08:32:56 sw_hotplug -- 
nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:17.361 08:32:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.361 08:32:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:17.620 08:32:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.620 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:17.620 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:17.620 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:17.620 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:17.620 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:17.620 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:17.620 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:17.620 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:17.620 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:17.620 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:17.878 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:17.878 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:17.878 08:32:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:30.080 08:33:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:30.080 08:33:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:30.080 08:33:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:30.080 08:33:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:30.080 08:33:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:30.080 08:33:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:30.080 08:33:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.080 08:33:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:30.080 08:33:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.080 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:30.080 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:30.080 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:30.080 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:30.080 [2024-11-19 08:33:09.048298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:14:30.080 [2024-11-19 08:33:09.051758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.080 [2024-11-19 08:33:09.051816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.080 [2024-11-19 08:33:09.051839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.080 [2024-11-19 08:33:09.051869] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.080 [2024-11-19 08:33:09.051884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.080 [2024-11-19 08:33:09.051901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.080 [2024-11-19 08:33:09.051916] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.080 [2024-11-19 08:33:09.051932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.080 [2024-11-19 08:33:09.051946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.080 [2024-11-19 08:33:09.051962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.080 [2024-11-19 08:33:09.051976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.080 [2024-11-19 08:33:09.051993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.080 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:30.080 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:30.080 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:30.080 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:30.080 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:30.080 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:30.080 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:30.080 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:30.080 08:33:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.080 08:33:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:30.080 08:33:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.080 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:30.080 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:30.649 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:30.649 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:30.649 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:30.649 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:30.649 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:30.649 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:14:30.649 08:33:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.649 08:33:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:30.649 08:33:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.649 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:30.649 08:33:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:30.649 [2024-11-19 08:33:09.748314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:30.649 [2024-11-19 08:33:09.751210] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.649 [2024-11-19 08:33:09.751267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.649 [2024-11-19 08:33:09.751295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.649 [2024-11-19 08:33:09.751322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.649 [2024-11-19 08:33:09.751340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.649 [2024-11-19 08:33:09.751355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.649 [2024-11-19 08:33:09.751373] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.649 [2024-11-19 08:33:09.751387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.649 [2024-11-19 08:33:09.751403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.649 [2024-11-19 08:33:09.751418] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.649 [2024-11-19 08:33:09.751434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.649 [2024-11-19 08:33:09.751448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.216 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:31.216 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:31.216 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:31.216 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:31.216 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:31.216 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:31.216 08:33:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.216 08:33:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:31.216 08:33:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.216 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:31.216 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:31.216 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:31.216 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- 
# echo uio_pci_generic 00:14:31.216 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:31.216 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:31.216 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:31.216 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:31.216 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:31.216 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:31.475 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:31.475 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:31.475 08:33:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:43.697 08:33:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.697 08:33:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:43.697 08:33:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:43.697 [2024-11-19 08:33:22.648571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:14:43.697 [2024-11-19 08:33:22.651801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.697 [2024-11-19 08:33:22.651859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.697 [2024-11-19 08:33:22.651883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.697 [2024-11-19 08:33:22.651913] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.697 [2024-11-19 08:33:22.651928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.697 [2024-11-19 08:33:22.651947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.697 [2024-11-19 08:33:22.651963] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.697 [2024-11-19 08:33:22.651979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.697 [2024-11-19 08:33:22.651993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.697 [2024-11-19 08:33:22.652009] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.697 [2024-11-19 08:33:22.652023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.697 [2024-11-19 08:33:22.652038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:43.697 08:33:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.697 08:33:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:43.697 08:33:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:43.697 08:33:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:44.264 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:44.264 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:44.264 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:44.264 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:44.264 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:44.264 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:14:44.264 08:33:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.264 08:33:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:44.264 08:33:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.264 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:44.264 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:44.264 [2024-11-19 08:33:23.348603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:44.264 [2024-11-19 08:33:23.351844] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.264 [2024-11-19 08:33:23.351895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.264 [2024-11-19 08:33:23.351923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.264 [2024-11-19 08:33:23.351949] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.264 [2024-11-19 08:33:23.351967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.264 [2024-11-19 08:33:23.351982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.264 [2024-11-19 08:33:23.352001] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.264 [2024-11-19 08:33:23.352015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.264 [2024-11-19 08:33:23.352033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.264 [2024-11-19 08:33:23.352049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:44.264 [2024-11-19 08:33:23.352064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.264 [2024-11-19 08:33:23.352078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.831 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:44.831 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:44.831 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:44.831 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:44.831 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:44.831 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:44.831 08:33:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.831 08:33:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:44.831 08:33:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.831 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:44.831 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:44.831 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:44.831 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- 
# echo uio_pci_generic 00:14:44.831 08:33:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:44.831 08:33:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:44.831 08:33:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:44.831 08:33:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:44.831 08:33:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:44.831 08:33:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:45.089 08:33:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:45.089 08:33:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:45.089 08:33:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:57.292 08:33:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.292 08:33:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:57.292 08:33:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:57.292 08:33:36 sw_hotplug -- common/autotest_common.sh@719 -- # time=46.80 00:14:57.292 08:33:36 sw_hotplug -- common/autotest_common.sh@720 -- # echo 46.80 00:14:57.292 08:33:36 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=46.80 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 46.80 2 00:14:57.292 remove_attach_helper took 46.80s to complete (handling 2 nvme drive(s)) 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:14:57.292 08:33:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.292 08:33:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:57.292 08:33:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:57.292 08:33:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.292 08:33:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:57.292 08:33:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:57.292 08:33:36 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:57.292 08:33:36 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:57.292 08:33:36 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:57.292 08:33:36 sw_hotplug -- common/autotest_common.sh@713 -- # 
local time=0 TIMEFORMAT=%2R 00:14:57.292 08:33:36 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:57.292 08:33:36 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:03.887 08:33:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.887 08:33:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:03.887 [2024-11-19 08:33:42.383220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:03.887 [2024-11-19 08:33:42.385164] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.887 [2024-11-19 08:33:42.385220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.887 [2024-11-19 08:33:42.385242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.887 [2024-11-19 08:33:42.385272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.887 [2024-11-19 08:33:42.385288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.887 [2024-11-19 08:33:42.385305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.887 [2024-11-19 08:33:42.385321] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.887 [2024-11-19 08:33:42.385340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.887 [2024-11-19 08:33:42.385360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.887 [2024-11-19 08:33:42.385391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.887 [2024-11-19 08:33:42.385417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.887 [2024-11-19 08:33:42.385439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.887 08:33:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:03.887 [2024-11-19 08:33:42.783241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:15:03.887 [2024-11-19 08:33:42.785206] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.887 [2024-11-19 08:33:42.785256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.887 [2024-11-19 08:33:42.785281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.887 [2024-11-19 08:33:42.785307] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.887 [2024-11-19 08:33:42.785325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.887 [2024-11-19 08:33:42.785340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.887 [2024-11-19 08:33:42.785359] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.887 [2024-11-19 08:33:42.785373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.887 [2024-11-19 08:33:42.785392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.887 [2024-11-19 08:33:42.785418] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.887 [2024-11-19 08:33:42.785447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.887 [2024-11-19 08:33:42.785466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:03.887 08:33:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.887 08:33:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:03.887 08:33:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:03.887 08:33:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:03.887 08:33:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:03.887 08:33:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 
00:15:03.887 08:33:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:03.887 08:33:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:03.887 08:33:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:03.887 08:33:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:03.887 08:33:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:03.887 08:33:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:04.146 08:33:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:04.146 08:33:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:04.146 08:33:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:16.353 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:16.353 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:16.353 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:16.353 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:16.353 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:16.353 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:16.353 08:33:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.353 08:33:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:16.353 08:33:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.353 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:16.353 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:16.353 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:16.353 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:16.353 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:16.353 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:16.353 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:16.353 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:16.354 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:16.354 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:16.354 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:16.354 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:16.354 08:33:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.354 08:33:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:16.354 08:33:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.354 [2024-11-19 08:33:55.383477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
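The echo sequence traced at sw_hotplug.sh@39-40 and @58-62 above is the kernel's sysfs software-hotplug interface rather than anything SPDK-specific: write 1 to a device's remove node to unplug it, then later rescan the bus and steer the rediscovered function to uio_pci_generic. The exact sysfs paths never appear in this trace, so the following is only a plausible sketch of the mechanism, not the script's literal contents:

# Plausible reconstruction; the sysfs paths are assumed. Only the echoed values
# (1, uio_pci_generic, the BDFs, '') are actually visible in the trace above.
bdf=0000:00:10.0

echo 1 > "/sys/bus/pci/devices/$bdf/remove"           # soft-unplug the function

# ...after the bdevs are gone and the hotplug wait elapses...
echo 1 > /sys/bus/pci/rescan                          # rediscover the function
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe              # bind per the override
echo '' > "/sys/bus/pci/devices/$bdf/driver_override" # clear the override again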
00:15:16.354 [2024-11-19 08:33:55.385492] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:16.354 [2024-11-19 08:33:55.385577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.354 [2024-11-19 08:33:55.385600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.354 [2024-11-19 08:33:55.385649] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:16.354 [2024-11-19 08:33:55.385673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.354 [2024-11-19 08:33:55.385690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.354 [2024-11-19 08:33:55.385706] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:16.354 [2024-11-19 08:33:55.385723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.354 [2024-11-19 08:33:55.385737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.354 [2024-11-19 08:33:55.385758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:16.354 [2024-11-19 08:33:55.385783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.354 [2024-11-19 08:33:55.385810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.354 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:16.354 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:16.612 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:16.612 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:16.612 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:16.612 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:16.612 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:16.612 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:16.613 08:33:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.613 08:33:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:16.871 08:33:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.871 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:16.871 08:33:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:16.871 [2024-11-19 08:33:55.983451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
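Between removal and re-attach, the script polls the SPDK target over RPC rather than looking at sysfs: bdev_bdfs (sw_hotplug.sh@12-13 above) lists the PCI address behind every NVMe bdev, and the @50-51 loop sleeps half a second while any address is still reported. A sketch reconstructed from the xtrace, with rpc_cmd standing in for scripts/rpc.py against the running target:

# Reconstructed from sw_hotplug.sh@12-13 and the @50-51 loop traced above.
bdev_bdfs() {
    # Every PCI address still backing an NVMe bdev, deduplicated.
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

bdfs=($(bdev_bdfs))
while ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done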
00:15:16.871 [2024-11-19 08:33:55.985366] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:16.871 [2024-11-19 08:33:55.985415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.871 [2024-11-19 08:33:55.985440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.872 [2024-11-19 08:33:55.985467] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:16.872 [2024-11-19 08:33:55.985488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.872 [2024-11-19 08:33:55.985503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.872 [2024-11-19 08:33:55.985520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:16.872 [2024-11-19 08:33:55.985533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.872 [2024-11-19 08:33:55.985550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.872 [2024-11-19 08:33:55.985564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:16.872 [2024-11-19 08:33:55.985579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:16.872 [2024-11-19 08:33:55.985594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:17.439 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:17.439 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:17.439 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:17.439 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:17.439 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:17.439 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:17.439 08:33:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.439 08:33:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:17.439 08:33:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.439 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:17.439 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:17.439 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:17.439 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:17.439 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:17.439 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:17.439 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:17.439 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:17.439 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:17.439 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:15:17.697 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:17.697 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:17.697 08:33:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:29.901 08:34:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:29.901 08:34:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:29.901 08:34:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:29.901 08:34:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.901 08:34:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:29.901 08:34:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:29.901 [2024-11-19 08:34:08.983708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:15:29.901 08:34:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:29.901 [2024-11-19 08:34:08.985580] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:29.901 [2024-11-19 08:34:08.985650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.901 [2024-11-19 08:34:08.985675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.901 [2024-11-19 08:34:08.985704] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:29.901 [2024-11-19 08:34:08.985720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.901 [2024-11-19 08:34:08.985737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.901 [2024-11-19 08:34:08.985752] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:29.901 [2024-11-19 08:34:08.985771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.901 [2024-11-19 08:34:08.985786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.901 [2024-11-19 08:34:08.985803] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:29.901 [2024-11-19 08:34:08.985817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.901 [2024-11-19 08:34:08.985833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.466 08:34:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:30.466 08:34:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:30.466 08:34:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:30.466 08:34:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:30.466 08:34:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:30.466 08:34:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:30.466 08:34:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.466 08:34:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:30.466 08:34:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.466 08:34:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:30.466 08:34:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:30.466 [2024-11-19 08:34:09.583728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:30.466 [2024-11-19 08:34:09.585661] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.466 [2024-11-19 08:34:09.585713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.466 [2024-11-19 08:34:09.585739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.466 [2024-11-19 08:34:09.585765] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.466 [2024-11-19 08:34:09.585790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.466 [2024-11-19 08:34:09.585805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.466 [2024-11-19 08:34:09.585824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.466 [2024-11-19 08:34:09.585837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.466 [2024-11-19 08:34:09.585854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.467 [2024-11-19 08:34:09.585869] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:30.467 [2024-11-19 08:34:09.585887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.467 [2024-11-19 08:34:09.585902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.033 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:31.033 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:31.033 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:31.033 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:31.033 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:31.033 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:31.033 08:34:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.033 08:34:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:31.033 08:34:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.033 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:31.033 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:31.033 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:31.033 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:31.033 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:31.033 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:31.033 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:31.033 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:31.033 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:31.033 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:15:31.292 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:31.292 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:31.292 08:34:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:43.496 08:34:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:43.496 08:34:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:43.496 08:34:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:43.496 08:34:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:43.496 08:34:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:43.496 08:34:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:43.496 08:34:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.496 08:34:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:43.496 08:34:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.496 08:34:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:43.496 08:34:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:43.496 08:34:22 sw_hotplug -- common/autotest_common.sh@719 -- # time=46.14 00:15:43.496 08:34:22 sw_hotplug -- common/autotest_common.sh@720 -- # echo 46.14 00:15:43.496 08:34:22 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:43.496 08:34:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=46.14 00:15:43.496 08:34:22 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 46.14 2 00:15:43.496 remove_attach_helper took 46.14s to complete (handling 2 nvme drive(s)) 08:34:22 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:15:43.496 08:34:22 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68981 00:15:43.496 08:34:22 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68981 ']' 00:15:43.496 08:34:22 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68981 00:15:43.496 08:34:22 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:15:43.496 08:34:22 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:43.496 08:34:22 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68981 00:15:43.496 08:34:22 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:43.496 08:34:22 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:43.496 killing process with pid 68981 00:15:43.496 08:34:22 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68981' 00:15:43.496 08:34:22 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68981 00:15:43.496 08:34:22 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68981 00:15:45.503 08:34:24 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:45.762 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:46.329 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:46.329 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:46.329 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:46.329 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:46.329 00:15:46.329 real 2m34.121s 00:15:46.329 user 1m55.141s 00:15:46.329 sys 0m19.033s 00:15:46.329 08:34:25 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.329 08:34:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:46.329 ************************************ 00:15:46.329 END TEST sw_hotplug 00:15:46.329 ************************************ 00:15:46.329 08:34:25 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:15:46.329 08:34:25 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:46.329 08:34:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:46.329 08:34:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.329 08:34:25 -- common/autotest_common.sh@10 -- # set +x 00:15:46.329 ************************************ 00:15:46.329 START TEST nvme_xnvme 00:15:46.329 ************************************ 00:15:46.329 08:34:25 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:46.588 * Looking for test storage... 00:15:46.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:46.588 08:34:25 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:46.588 08:34:25 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:46.588 08:34:25 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:46.588 08:34:25 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:46.588 08:34:25 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:15:46.588 08:34:25 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.588 08:34:25 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:46.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.588 --rc genhtml_branch_coverage=1 00:15:46.588 --rc genhtml_function_coverage=1 00:15:46.588 --rc genhtml_legend=1 00:15:46.588 --rc geninfo_all_blocks=1 00:15:46.588 --rc geninfo_unexecuted_blocks=1 00:15:46.588 00:15:46.588 ' 00:15:46.588 08:34:25 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:46.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.588 --rc genhtml_branch_coverage=1 00:15:46.588 --rc genhtml_function_coverage=1 00:15:46.588 --rc genhtml_legend=1 00:15:46.588 --rc geninfo_all_blocks=1 00:15:46.588 --rc geninfo_unexecuted_blocks=1 00:15:46.588 00:15:46.588 ' 00:15:46.588 08:34:25 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:46.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.588 --rc genhtml_branch_coverage=1 00:15:46.588 --rc genhtml_function_coverage=1 00:15:46.588 --rc genhtml_legend=1 00:15:46.588 --rc geninfo_all_blocks=1 00:15:46.588 --rc geninfo_unexecuted_blocks=1 00:15:46.588 00:15:46.588 ' 00:15:46.588 08:34:25 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:46.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.588 --rc genhtml_branch_coverage=1 00:15:46.588 --rc genhtml_function_coverage=1 00:15:46.588 --rc genhtml_legend=1 00:15:46.589 --rc geninfo_all_blocks=1 00:15:46.589 --rc geninfo_unexecuted_blocks=1 00:15:46.589 00:15:46.589 ' 00:15:46.589 08:34:25 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:46.589 08:34:25 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:15:46.589 08:34:25 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.589 08:34:25 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.589 08:34:25 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.589 08:34:25 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.589 08:34:25 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.589 08:34:25 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.589 08:34:25 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:46.589 08:34:25 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.589 08:34:25 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:15:46.589 08:34:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:46.589 08:34:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.589 08:34:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:46.589 ************************************ 00:15:46.589 START TEST xnvme_to_malloc_dd_copy 00:15:46.589 ************************************ 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:15:46.589 08:34:25 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:46.589 08:34:25 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:46.589 { 00:15:46.589 "subsystems": [ 00:15:46.589 { 00:15:46.589 "subsystem": "bdev", 00:15:46.589 "config": [ 00:15:46.589 { 00:15:46.589 "params": { 00:15:46.589 "block_size": 512, 00:15:46.589 "num_blocks": 2097152, 00:15:46.589 "name": "malloc0" 00:15:46.589 }, 00:15:46.589 "method": "bdev_malloc_create" 00:15:46.589 }, 00:15:46.589 { 00:15:46.589 "params": { 00:15:46.589 "io_mechanism": "libaio", 00:15:46.589 "filename": "/dev/nullb0", 00:15:46.589 "name": "null0" 00:15:46.589 }, 00:15:46.589 "method": "bdev_xnvme_create" 00:15:46.589 }, 00:15:46.589 { 00:15:46.589 "method": "bdev_wait_for_examine" 00:15:46.589 } 00:15:46.589 ] 00:15:46.589 } 00:15:46.589 ] 00:15:46.589 } 00:15:46.848 [2024-11-19 08:34:25.921041] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
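The JSON printed just above is the whole story of each copy pass: spdk_dd is handed two bdevs, a 1 GiB ram-backed malloc0 (2097152 blocks of 512 B) and an xnvme bdev null0 wrapping the null_blk device, and --ib/--ob choose the direction. A stand-alone sketch of the same run; the only liberty taken is writing the config to a temp file instead of the /dev/fd/62 process-substitution trick the test uses:

# Reconstructed from the trace above; config written to a file instead of /dev/fd/62.
modprobe null_blk gb=1            # provides /dev/nullb0, as init_null_blk does above

cat > /tmp/xnvme_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" } },
        { "method": "bdev_xnvme_create",
          "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# malloc0 -> null0; the next pass swaps --ib/--ob to copy back the other way,
# and the io_uring passes change only the "io_mechanism" parameter.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json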
00:15:46.848 [2024-11-19 08:34:25.921230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70384 ] 00:15:46.848 [2024-11-19 08:34:26.105804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.106 [2024-11-19 08:34:26.231051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.638  [2024-11-19T08:34:29.869Z] Copying: 167/1024 [MB] (167 MBps) [2024-11-19T08:34:30.808Z] Copying: 334/1024 [MB] (166 MBps) [2024-11-19T08:34:31.749Z] Copying: 501/1024 [MB] (167 MBps) [2024-11-19T08:34:32.685Z] Copying: 668/1024 [MB] (167 MBps) [2024-11-19T08:34:33.620Z] Copying: 836/1024 [MB] (168 MBps) [2024-11-19T08:34:33.879Z] Copying: 1003/1024 [MB] (166 MBps) [2024-11-19T08:34:37.164Z] Copying: 1024/1024 [MB] (average 167 MBps) 00:15:57.868 00:15:57.868 08:34:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:15:57.868 08:34:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:15:57.868 08:34:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:57.868 08:34:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:57.868 { 00:15:57.868 "subsystems": [ 00:15:57.868 { 00:15:57.868 "subsystem": "bdev", 00:15:57.868 "config": [ 00:15:57.868 { 00:15:57.868 "params": { 00:15:57.868 "block_size": 512, 00:15:57.868 "num_blocks": 2097152, 00:15:57.868 "name": "malloc0" 00:15:57.868 }, 00:15:57.868 "method": "bdev_malloc_create" 00:15:57.868 }, 00:15:57.868 { 00:15:57.868 "params": { 00:15:57.868 "io_mechanism": "libaio", 00:15:57.868 "filename": "/dev/nullb0", 00:15:57.868 "name": "null0" 00:15:57.868 }, 00:15:57.868 "method": "bdev_xnvme_create" 00:15:57.868 }, 00:15:57.868 { 00:15:57.868 "method": "bdev_wait_for_examine" 00:15:57.868 } 00:15:57.868 ] 00:15:57.868 } 00:15:57.868 ] 00:15:57.868 } 00:15:57.868 [2024-11-19 08:34:37.148953] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:15:57.868 [2024-11-19 08:34:37.149660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70511 ] 00:15:58.127 [2024-11-19 08:34:37.325904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.385 [2024-11-19 08:34:37.466520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.980  [2024-11-19T08:34:40.842Z] Copying: 162/1024 [MB] (162 MBps) [2024-11-19T08:34:41.777Z] Copying: 325/1024 [MB] (163 MBps) [2024-11-19T08:34:43.153Z] Copying: 490/1024 [MB] (164 MBps) [2024-11-19T08:34:44.087Z] Copying: 653/1024 [MB] (163 MBps) [2024-11-19T08:34:45.022Z] Copying: 818/1024 [MB] (165 MBps) [2024-11-19T08:34:45.022Z] Copying: 982/1024 [MB] (163 MBps) [2024-11-19T08:34:49.260Z] Copying: 1024/1024 [MB] (average 163 MBps) 00:16:09.964 00:16:09.964 08:34:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:16:09.964 08:34:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:09.964 08:34:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:16:09.964 08:34:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:16:09.964 08:34:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:09.964 08:34:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:09.964 { 00:16:09.964 "subsystems": [ 00:16:09.964 { 00:16:09.964 "subsystem": "bdev", 00:16:09.964 "config": [ 00:16:09.964 { 00:16:09.964 "params": { 00:16:09.964 "block_size": 512, 00:16:09.964 "num_blocks": 2097152, 00:16:09.964 "name": "malloc0" 00:16:09.964 }, 00:16:09.964 "method": "bdev_malloc_create" 00:16:09.964 }, 00:16:09.964 { 00:16:09.964 "params": { 00:16:09.964 "io_mechanism": "io_uring", 00:16:09.964 "filename": "/dev/nullb0", 00:16:09.964 "name": "null0" 00:16:09.964 }, 00:16:09.964 "method": "bdev_xnvme_create" 00:16:09.964 }, 00:16:09.964 { 00:16:09.964 "method": "bdev_wait_for_examine" 00:16:09.964 } 00:16:09.964 ] 00:16:09.964 } 00:16:09.964 ] 00:16:09.964 } 00:16:09.964 [2024-11-19 08:34:48.531435] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:16:09.964 [2024-11-19 08:34:48.531626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70643 ] 00:16:09.964 [2024-11-19 08:34:48.717965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.964 [2024-11-19 08:34:48.874312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.495  [2024-11-19T08:34:52.424Z] Copying: 164/1024 [MB] (164 MBps) [2024-11-19T08:34:53.370Z] Copying: 343/1024 [MB] (179 MBps) [2024-11-19T08:34:54.304Z] Copying: 523/1024 [MB] (179 MBps) [2024-11-19T08:34:55.238Z] Copying: 703/1024 [MB] (180 MBps) [2024-11-19T08:34:56.171Z] Copying: 884/1024 [MB] (180 MBps) [2024-11-19T08:34:59.453Z] Copying: 1024/1024 [MB] (average 177 MBps) 00:16:20.157 00:16:20.157 08:34:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:16:20.157 08:34:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:16:20.157 08:34:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:20.157 08:34:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:20.157 { 00:16:20.157 "subsystems": [ 00:16:20.157 { 00:16:20.157 "subsystem": "bdev", 00:16:20.157 "config": [ 00:16:20.157 { 00:16:20.157 "params": { 00:16:20.157 "block_size": 512, 00:16:20.157 "num_blocks": 2097152, 00:16:20.157 "name": "malloc0" 00:16:20.157 }, 00:16:20.157 "method": "bdev_malloc_create" 00:16:20.157 }, 00:16:20.157 { 00:16:20.157 "params": { 00:16:20.157 "io_mechanism": "io_uring", 00:16:20.157 "filename": "/dev/nullb0", 00:16:20.157 "name": "null0" 00:16:20.157 }, 00:16:20.157 "method": "bdev_xnvme_create" 00:16:20.157 }, 00:16:20.157 { 00:16:20.157 "method": "bdev_wait_for_examine" 00:16:20.157 } 00:16:20.157 ] 00:16:20.157 } 00:16:20.157 ] 00:16:20.157 } 00:16:20.157 [2024-11-19 08:34:59.407415] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:16:20.157 [2024-11-19 08:34:59.407627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70763 ] 00:16:20.415 [2024-11-19 08:34:59.591511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.415 [2024-11-19 08:34:59.701479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.976  [2024-11-19T08:35:03.205Z] Copying: 171/1024 [MB] (171 MBps) [2024-11-19T08:35:04.140Z] Copying: 352/1024 [MB] (181 MBps) [2024-11-19T08:35:05.075Z] Copying: 531/1024 [MB] (179 MBps) [2024-11-19T08:35:06.451Z] Copying: 711/1024 [MB] (179 MBps) [2024-11-19T08:35:07.018Z] Copying: 888/1024 [MB] (176 MBps) [2024-11-19T08:35:10.300Z] Copying: 1024/1024 [MB] (average 176 MBps) 00:16:31.004 00:16:31.262 08:35:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:16:31.262 08:35:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:16:31.262 00:16:31.262 real 0m44.560s 00:16:31.262 user 0m38.702s 00:16:31.262 sys 0m5.172s 00:16:31.262 08:35:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.262 08:35:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:31.262 ************************************ 00:16:31.262 END TEST xnvme_to_malloc_dd_copy 00:16:31.262 ************************************ 00:16:31.262 08:35:10 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:31.262 08:35:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:31.262 08:35:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.262 08:35:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:31.262 ************************************ 00:16:31.262 START TEST xnvme_bdevperf 00:16:31.262 ************************************ 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # 
method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:16:31.262 08:35:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:16:31.263 08:35:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:31.263 08:35:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:31.263 { 00:16:31.263 "subsystems": [ 00:16:31.263 { 00:16:31.263 "subsystem": "bdev", 00:16:31.263 "config": [ 00:16:31.263 { 00:16:31.263 "params": { 00:16:31.263 "io_mechanism": "libaio", 00:16:31.263 "filename": "/dev/nullb0", 00:16:31.263 "name": "null0" 00:16:31.263 }, 00:16:31.263 "method": "bdev_xnvme_create" 00:16:31.263 }, 00:16:31.263 { 00:16:31.263 "method": "bdev_wait_for_examine" 00:16:31.263 } 00:16:31.263 ] 00:16:31.263 } 00:16:31.263 ] 00:16:31.263 } 00:16:31.263 [2024-11-19 08:35:10.532923] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:16:31.263 [2024-11-19 08:35:10.533100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70911 ] 00:16:31.521 [2024-11-19 08:35:10.714060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.780 [2024-11-19 08:35:10.870399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.039 Running I/O for 5 seconds... 
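The bdevperf flags on the command line above decode as: -q 64 keeps 64 IOs in flight, -w randread picks the workload, -t 5 runs for five seconds, -o 4096 sets the IO size in bytes, and -T null0 restricts the run to the bdev named null0. The same invocation with the xnvme config in a plain file instead of /dev/fd/62 (the file name here is illustrative, not from the trace):

# /tmp/xnvme_perf.json is hypothetical; its body is the bdev_xnvme_create config
# printed above (io_mechanism libaio, filename /dev/nullb0, name null0).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_perf.json -q 64 -w randread -t 5 -T null0 -o 4096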
00:16:33.960 109568.00 IOPS, 428.00 MiB/s [2024-11-19T08:35:14.629Z] 109856.00 IOPS, 429.12 MiB/s [2024-11-19T08:35:15.574Z] 109610.67 IOPS, 428.17 MiB/s [2024-11-19T08:35:16.508Z] 109408.00 IOPS, 427.38 MiB/s 00:16:37.212 Latency(us) 00:16:37.212 [2024-11-19T08:35:16.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.212 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:37.212 null0 : 5.00 109480.99 427.66 0.00 0.00 581.14 161.05 2725.70 00:16:37.212 [2024-11-19T08:35:16.509Z] =================================================================================================================== 00:16:37.213 [2024-11-19T08:35:16.509Z] Total : 109480.99 427.66 0.00 0.00 581.14 161.05 2725.70 00:16:38.210 08:35:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:16:38.210 08:35:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:38.210 08:35:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:16:38.210 08:35:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:16:38.210 08:35:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:38.210 08:35:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:38.210 { 00:16:38.210 "subsystems": [ 00:16:38.210 { 00:16:38.210 "subsystem": "bdev", 00:16:38.210 "config": [ 00:16:38.210 { 00:16:38.210 "params": { 00:16:38.210 "io_mechanism": "io_uring", 00:16:38.210 "filename": "/dev/nullb0", 00:16:38.210 "name": "null0" 00:16:38.210 }, 00:16:38.210 "method": "bdev_xnvme_create" 00:16:38.210 }, 00:16:38.210 { 00:16:38.210 "method": "bdev_wait_for_examine" 00:16:38.210 } 00:16:38.210 ] 00:16:38.210 } 00:16:38.211 ] 00:16:38.211 } 00:16:38.211 [2024-11-19 08:35:17.345629] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:16:38.211 [2024-11-19 08:35:17.345777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70992 ] 00:16:38.469 [2024-11-19 08:35:17.519739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.469 [2024-11-19 08:35:17.623676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.727 Running I/O for 5 seconds... 
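One way to sanity-check these result tables: the MiB/s column is just IOPS times the 4 KiB IO size, so the libaio average above reduces to a one-liner:

# 109480.99 IOPS x 4096 B / 2^20 B-per-MiB = 427.66 MiB/s, matching the table above.
awk 'BEGIN { printf "%.2f\n", 109480.99 * 4096 / (1024 * 1024) }'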
00:16:41.035 149888.00 IOPS, 585.50 MiB/s [2024-11-19T08:35:21.265Z] 149856.00 IOPS, 585.38 MiB/s [2024-11-19T08:35:22.200Z] 149802.67 IOPS, 585.17 MiB/s [2024-11-19T08:35:23.136Z] 149824.00 IOPS, 585.25 MiB/s 00:16:43.840 Latency(us) 00:16:43.840 [2024-11-19T08:35:23.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.840 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:43.840 null0 : 5.00 149836.06 585.30 0.00 0.00 423.87 238.31 2323.55 00:16:43.840 [2024-11-19T08:35:23.136Z] =================================================================================================================== 00:16:43.840 [2024-11-19T08:35:23.136Z] Total : 149836.06 585.30 0.00 0.00 423.87 238.31 2323.55 00:16:44.776 08:35:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:16:44.776 08:35:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:16:44.776 00:16:44.776 real 0m13.539s 00:16:44.776 user 0m10.482s 00:16:44.776 sys 0m2.826s 00:16:44.776 08:35:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.776 08:35:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:44.776 ************************************ 00:16:44.776 END TEST xnvme_bdevperf 00:16:44.776 ************************************ 00:16:44.776 00:16:44.776 real 0m58.383s 00:16:44.776 user 0m49.324s 00:16:44.776 sys 0m8.143s 00:16:44.776 08:35:23 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:44.776 08:35:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:44.776 ************************************ 00:16:44.776 END TEST nvme_xnvme 00:16:44.776 ************************************ 00:16:44.776 08:35:24 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:44.776 08:35:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:44.776 08:35:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:44.776 08:35:24 -- common/autotest_common.sh@10 -- # set +x 00:16:44.776 ************************************ 00:16:44.776 START TEST blockdev_xnvme 00:16:44.776 ************************************ 00:16:44.776 08:35:24 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:16:45.035 * Looking for test storage... 
00:16:45.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:45.035 08:35:24 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:45.035 08:35:24 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:45.035 08:35:24 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:45.035 08:35:24 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:45.035 08:35:24 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:16:45.035 08:35:24 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:45.035 08:35:24 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:45.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.035 --rc genhtml_branch_coverage=1 00:16:45.035 --rc genhtml_function_coverage=1 00:16:45.035 --rc genhtml_legend=1 00:16:45.035 --rc geninfo_all_blocks=1 00:16:45.035 --rc geninfo_unexecuted_blocks=1 00:16:45.035 00:16:45.035 ' 00:16:45.035 08:35:24 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:45.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.035 --rc genhtml_branch_coverage=1 00:16:45.035 --rc genhtml_function_coverage=1 00:16:45.035 --rc genhtml_legend=1 
00:16:45.035 --rc geninfo_all_blocks=1 00:16:45.035 --rc geninfo_unexecuted_blocks=1 00:16:45.035 00:16:45.035 ' 00:16:45.035 08:35:24 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:45.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.035 --rc genhtml_branch_coverage=1 00:16:45.035 --rc genhtml_function_coverage=1 00:16:45.035 --rc genhtml_legend=1 00:16:45.035 --rc geninfo_all_blocks=1 00:16:45.035 --rc geninfo_unexecuted_blocks=1 00:16:45.035 00:16:45.035 ' 00:16:45.035 08:35:24 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:45.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.035 --rc genhtml_branch_coverage=1 00:16:45.035 --rc genhtml_function_coverage=1 00:16:45.035 --rc genhtml_legend=1 00:16:45.035 --rc geninfo_all_blocks=1 00:16:45.035 --rc geninfo_unexecuted_blocks=1 00:16:45.035 00:16:45.035 ' 00:16:45.035 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:45.035 08:35:24 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:16:45.035 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:45.035 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:45.035 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:45.035 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:45.035 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:45.035 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:45.035 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:16:45.035 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:45.035 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:45.035 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:45.035 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:16:45.035 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:45.035 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:45.035 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:16:45.036 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:45.036 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:16:45.036 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:45.036 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:45.036 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:45.036 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:16:45.036 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:16:45.036 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:45.036 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71140 00:16:45.036 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:45.036 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:45.036 08:35:24 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71140 00:16:45.036 08:35:24 blockdev_xnvme -- common/autotest_common.sh@835 -- # 
'[' -z 71140 ']' 00:16:45.036 08:35:24 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.036 08:35:24 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.036 08:35:24 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.036 08:35:24 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.036 08:35:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:45.299 [2024-11-19 08:35:24.361885] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:16:45.299 [2024-11-19 08:35:24.362515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71140 ] 00:16:45.299 [2024-11-19 08:35:24.549242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.560 [2024-11-19 08:35:24.681780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.495 08:35:25 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.495 08:35:25 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:16:46.495 08:35:25 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:46.495 08:35:25 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:16:46.495 08:35:25 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:16:46.495 08:35:25 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:16:46.495 08:35:25 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:46.753 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:46.753 Waiting for block devices as requested 00:16:47.011 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:47.011 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:47.011 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:47.270 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:52.536 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:52.536 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:16:52.536 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:16:52.536 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:16:52.536 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:16:52.536 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:52.536 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:16:52.537 
08:35:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:16:52.537 nvme0n1 00:16:52.537 nvme1n1 00:16:52.537 nvme2n1 00:16:52.537 nvme2n2 00:16:52.537 nvme2n3 00:16:52.537 nvme3n1 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:52.537 08:35:31 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:52.537 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:16:52.538 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "21f71ed3-75ff-4976-b12b-9e967953593e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "21f71ed3-75ff-4976-b12b-9e967953593e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "af803469-f362-4183-8d53-b355208414db"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "af803469-f362-4183-8d53-b355208414db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "578cd6c4-dee6-412a-bd1c-28246ae5e604"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "578cd6c4-dee6-412a-bd1c-28246ae5e604",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "c3172fb2-fc4c-4ed4-baae-19460302cd72"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c3172fb2-fc4c-4ed4-baae-19460302cd72",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a31bab33-c26d-4c18-b5ad-3e1363a887d8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a31bab33-c26d-4c18-b5ad-3e1363a887d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "28ac4e7d-60fe-4e6b-8262-64381b97d17a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "28ac4e7d-60fe-4e6b-8262-64381b97d17a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:52.538 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:52.538 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:16:52.538 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:52.538 08:35:31 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 71140 00:16:52.538 08:35:31 blockdev_xnvme -- 
common/autotest_common.sh@954 -- # '[' -z 71140 ']' 00:16:52.538 08:35:31 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 71140 00:16:52.538 08:35:31 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:16:52.538 08:35:31 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:52.538 08:35:31 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71140 00:16:52.538 killing process with pid 71140 00:16:52.538 08:35:31 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:52.538 08:35:31 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:52.538 08:35:31 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71140' 00:16:52.538 08:35:31 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 71140 00:16:52.538 08:35:31 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 71140 00:16:54.491 08:35:33 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:54.491 08:35:33 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:54.491 08:35:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:54.491 08:35:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.491 08:35:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:54.491 ************************************ 00:16:54.491 START TEST bdev_hello_world 00:16:54.491 ************************************ 00:16:54.491 08:35:33 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:16:54.748 [2024-11-19 08:35:33.839902] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:16:54.748 [2024-11-19 08:35:33.840283] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71508 ] 00:16:54.748 [2024-11-19 08:35:34.026414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.005 [2024-11-19 08:35:34.151000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.263 [2024-11-19 08:35:34.548446] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:55.263 [2024-11-19 08:35:34.548677] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:16:55.263 [2024-11-19 08:35:34.548714] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:55.263 [2024-11-19 08:35:34.550961] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:55.263 [2024-11-19 08:35:34.551238] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:55.263 [2024-11-19 08:35:34.551267] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:55.263 [2024-11-19 08:35:34.551458] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
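Annotation: the setup_xnvme_conf step traced earlier reduces to one bdev_xnvme_create RPC per visible, non-zoned /dev/nvme*n* node, all using io_uring as the I/O mechanism. A condensed sketch of that loop, issued through scripts/rpc.py instead of the rpc_cmd pipe the harness uses (paths and device names assume the CI VM layout shown in this log):

    # Condensed sketch of setup_xnvme_conf (bdev/blockdev.sh@88-100); assumes
    # spdk_tgt is already listening on the default /var/tmp/spdk.sock.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    io_mechanism=io_uring
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue                        # only real block nodes
        "$rpc" bdev_xnvme_create "$nvme" "${nvme##*/}" "$io_mechanism"
    done
    "$rpc" bdev_get_bdevs | jq -r '.[].name'              # expect the six xnvme bdevs listed above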
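Annotation: the bdev_hello_world run above is the stock hello_bdev example pointed at the first xnvme bdev. It can be replayed by hand with the same arguments the harness passed:

    # Manual replay of the bdev_hello_world step (same arguments as run_test above).
    SPDK_REPO=/home/vagrant/spdk_repo/spdk                # assumption: CI VM checkout path
    "$SPDK_REPO/build/examples/hello_bdev" \
        --json "$SPDK_REPO/test/bdev/bdev.json" \
        -b nvme0n1                                        # writes then reads back "Hello World!"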
00:16:55.263 00:16:55.263 [2024-11-19 08:35:34.551493] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:56.638 ************************************ 00:16:56.638 END TEST bdev_hello_world 00:16:56.638 ************************************ 00:16:56.638 00:16:56.638 real 0m1.780s 00:16:56.638 user 0m1.450s 00:16:56.638 sys 0m0.213s 00:16:56.638 08:35:35 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.638 08:35:35 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:56.638 08:35:35 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:56.638 08:35:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:56.638 08:35:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.638 08:35:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:56.638 ************************************ 00:16:56.638 START TEST bdev_bounds 00:16:56.638 ************************************ 00:16:56.638 08:35:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:16:56.638 08:35:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71545 00:16:56.638 08:35:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:56.638 Process bdevio pid: 71545 00:16:56.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.638 08:35:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:56.638 08:35:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71545' 00:16:56.638 08:35:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71545 00:16:56.638 08:35:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 71545 ']' 00:16:56.638 08:35:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.638 08:35:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.638 08:35:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.638 08:35:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.638 08:35:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:56.638 [2024-11-19 08:35:35.670560] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:16:56.638 [2024-11-19 08:35:35.670751] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71545 ] 00:16:56.638 [2024-11-19 08:35:35.859585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:56.896 [2024-11-19 08:35:35.988139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.896 [2024-11-19 08:35:35.988248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.896 [2024-11-19 08:35:35.988255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.463 08:35:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.463 08:35:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:16:57.463 08:35:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:57.721 I/O targets: 00:16:57.721 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:16:57.721 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:16:57.721 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:57.721 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:57.721 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:57.721 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:16:57.721 00:16:57.721 00:16:57.721 CUnit - A unit testing framework for C - Version 2.1-3 00:16:57.721 http://cunit.sourceforge.net/ 00:16:57.721 00:16:57.721 00:16:57.721 Suite: bdevio tests on: nvme3n1 00:16:57.721 Test: blockdev write read block ...passed 00:16:57.721 Test: blockdev write zeroes read block ...passed 00:16:57.721 Test: blockdev write zeroes read no split ...passed 00:16:57.721 Test: blockdev write zeroes read split ...passed 00:16:57.721 Test: blockdev write zeroes read split partial ...passed 00:16:57.721 Test: blockdev reset ...passed 00:16:57.721 Test: blockdev write read 8 blocks ...passed 00:16:57.721 Test: blockdev write read size > 128k ...passed 00:16:57.721 Test: blockdev write read invalid size ...passed 00:16:57.721 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:57.721 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:57.721 Test: blockdev write read max offset ...passed 00:16:57.721 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:57.721 Test: blockdev writev readv 8 blocks ...passed 00:16:57.721 Test: blockdev writev readv 30 x 1block ...passed 00:16:57.721 Test: blockdev writev readv block ...passed 00:16:57.721 Test: blockdev writev readv size > 128k ...passed 00:16:57.721 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:57.721 Test: blockdev comparev and writev ...passed 00:16:57.721 Test: blockdev nvme passthru rw ...passed 00:16:57.721 Test: blockdev nvme passthru vendor specific ...passed 00:16:57.721 Test: blockdev nvme admin passthru ...passed 00:16:57.721 Test: blockdev copy ...passed 00:16:57.721 Suite: bdevio tests on: nvme2n3 00:16:57.721 Test: blockdev write read block ...passed 00:16:57.721 Test: blockdev write zeroes read block ...passed 00:16:57.721 Test: blockdev write zeroes read no split ...passed 00:16:57.721 Test: blockdev write zeroes read split ...passed 00:16:57.721 Test: blockdev write zeroes read split partial ...passed 00:16:57.721 Test: blockdev reset ...passed 
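Annotation: the MiB figures in the I/O targets banner above are just blocks times block size; a quick shell-arithmetic check:

    # Sanity check of the I/O targets sizes: blocks * block_size / 2^20 = MiB.
    echo $(( 1310720 * 4096 / 1048576 ))   # 5120 -> nvme0n1 "5120 MiB"
    echo $(( 1048576 * 4096 / 1048576 ))   # 4096 -> nvme2n1/n2/n3 "4096 MiB"
    echo $((  262144 * 4096 / 1048576 ))   # 1024 -> nvme3n1 "1024 MiB"
    echo $(( 1548666 * 4096 / 1048576 ))   # 6049 -> nvme1n1's block count does not divide
                                           #         evenly; the banner's "6050 MiB" is rounded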
00:16:57.721 Test: blockdev write read 8 blocks ...passed 00:16:57.721 Test: blockdev write read size > 128k ...passed 00:16:57.721 Test: blockdev write read invalid size ...passed 00:16:57.721 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:57.721 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:57.721 Test: blockdev write read max offset ...passed 00:16:57.721 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:57.721 Test: blockdev writev readv 8 blocks ...passed 00:16:57.721 Test: blockdev writev readv 30 x 1block ...passed 00:16:57.721 Test: blockdev writev readv block ...passed 00:16:57.721 Test: blockdev writev readv size > 128k ...passed 00:16:57.721 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:57.721 Test: blockdev comparev and writev ...passed 00:16:57.721 Test: blockdev nvme passthru rw ...passed 00:16:57.721 Test: blockdev nvme passthru vendor specific ...passed 00:16:57.721 Test: blockdev nvme admin passthru ...passed 00:16:57.721 Test: blockdev copy ...passed 00:16:57.721 Suite: bdevio tests on: nvme2n2 00:16:57.721 Test: blockdev write read block ...passed 00:16:57.721 Test: blockdev write zeroes read block ...passed 00:16:57.721 Test: blockdev write zeroes read no split ...passed 00:16:57.722 Test: blockdev write zeroes read split ...passed 00:16:57.722 Test: blockdev write zeroes read split partial ...passed 00:16:57.722 Test: blockdev reset ...passed 00:16:57.722 Test: blockdev write read 8 blocks ...passed 00:16:57.722 Test: blockdev write read size > 128k ...passed 00:16:57.722 Test: blockdev write read invalid size ...passed 00:16:57.722 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:57.722 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:57.722 Test: blockdev write read max offset ...passed 00:16:57.722 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:57.722 Test: blockdev writev readv 8 blocks ...passed 00:16:57.722 Test: blockdev writev readv 30 x 1block ...passed 00:16:57.722 Test: blockdev writev readv block ...passed 00:16:57.722 Test: blockdev writev readv size > 128k ...passed 00:16:57.722 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:57.722 Test: blockdev comparev and writev ...passed 00:16:57.722 Test: blockdev nvme passthru rw ...passed 00:16:57.722 Test: blockdev nvme passthru vendor specific ...passed 00:16:57.722 Test: blockdev nvme admin passthru ...passed 00:16:57.722 Test: blockdev copy ...passed 00:16:57.722 Suite: bdevio tests on: nvme2n1 00:16:57.722 Test: blockdev write read block ...passed 00:16:57.722 Test: blockdev write zeroes read block ...passed 00:16:57.722 Test: blockdev write zeroes read no split ...passed 00:16:57.722 Test: blockdev write zeroes read split ...passed 00:16:57.980 Test: blockdev write zeroes read split partial ...passed 00:16:57.980 Test: blockdev reset ...passed 00:16:57.980 Test: blockdev write read 8 blocks ...passed 00:16:57.980 Test: blockdev write read size > 128k ...passed 00:16:57.980 Test: blockdev write read invalid size ...passed 00:16:57.980 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:57.980 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:57.980 Test: blockdev write read max offset ...passed 00:16:57.980 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:57.980 Test: blockdev writev readv 8 blocks 
...passed 00:16:57.980 Test: blockdev writev readv 30 x 1block ...passed 00:16:57.980 Test: blockdev writev readv block ...passed 00:16:57.980 Test: blockdev writev readv size > 128k ...passed 00:16:57.980 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:57.980 Test: blockdev comparev and writev ...passed 00:16:57.980 Test: blockdev nvme passthru rw ...passed 00:16:57.980 Test: blockdev nvme passthru vendor specific ...passed 00:16:57.980 Test: blockdev nvme admin passthru ...passed 00:16:57.980 Test: blockdev copy ...passed 00:16:57.980 Suite: bdevio tests on: nvme1n1 00:16:57.980 Test: blockdev write read block ...passed 00:16:57.980 Test: blockdev write zeroes read block ...passed 00:16:57.980 Test: blockdev write zeroes read no split ...passed 00:16:57.980 Test: blockdev write zeroes read split ...passed 00:16:57.980 Test: blockdev write zeroes read split partial ...passed 00:16:57.980 Test: blockdev reset ...passed 00:16:57.981 Test: blockdev write read 8 blocks ...passed 00:16:57.981 Test: blockdev write read size > 128k ...passed 00:16:57.981 Test: blockdev write read invalid size ...passed 00:16:57.981 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:57.981 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:57.981 Test: blockdev write read max offset ...passed 00:16:57.981 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:57.981 Test: blockdev writev readv 8 blocks ...passed 00:16:57.981 Test: blockdev writev readv 30 x 1block ...passed 00:16:57.981 Test: blockdev writev readv block ...passed 00:16:57.981 Test: blockdev writev readv size > 128k ...passed 00:16:57.981 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:57.981 Test: blockdev comparev and writev ...passed 00:16:57.981 Test: blockdev nvme passthru rw ...passed 00:16:57.981 Test: blockdev nvme passthru vendor specific ...passed 00:16:57.981 Test: blockdev nvme admin passthru ...passed 00:16:57.981 Test: blockdev copy ...passed 00:16:57.981 Suite: bdevio tests on: nvme0n1 00:16:57.981 Test: blockdev write read block ...passed 00:16:57.981 Test: blockdev write zeroes read block ...passed 00:16:57.981 Test: blockdev write zeroes read no split ...passed 00:16:57.981 Test: blockdev write zeroes read split ...passed 00:16:57.981 Test: blockdev write zeroes read split partial ...passed 00:16:57.981 Test: blockdev reset ...passed 00:16:57.981 Test: blockdev write read 8 blocks ...passed 00:16:57.981 Test: blockdev write read size > 128k ...passed 00:16:57.981 Test: blockdev write read invalid size ...passed 00:16:57.981 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:57.981 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:57.981 Test: blockdev write read max offset ...passed 00:16:57.981 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:57.981 Test: blockdev writev readv 8 blocks ...passed 00:16:57.981 Test: blockdev writev readv 30 x 1block ...passed 00:16:57.981 Test: blockdev writev readv block ...passed 00:16:57.981 Test: blockdev writev readv size > 128k ...passed 00:16:57.981 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:57.981 Test: blockdev comparev and writev ...passed 00:16:57.981 Test: blockdev nvme passthru rw ...passed 00:16:57.981 Test: blockdev nvme passthru vendor specific ...passed 00:16:57.981 Test: blockdev nvme admin passthru ...passed 00:16:57.981 Test: blockdev copy ...passed 
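Annotation: the suites above were driven in two steps: bdevio was started in wait mode with the JSON bdev config, and test/bdev/bdevio/tests.py then triggered the CUnit run over the RPC socket. The equivalent manual two-step, with the same arguments the harness used:

    # Manual replay of the bdev_bounds step (arguments as in the trace above).
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    "$SPDK_REPO/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$SPDK_REPO/test/bdev/bdev.json" &         # -w: initialize, then wait for RPC
    bdevio_pid=$!
    "$SPDK_REPO/test/bdev/bdevio/tests.py" perform_tests  # runs the CUnit suites shown above
    kill "$bdevio_pid"                                    # the harness does this via killprocess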
00:16:57.981 00:16:57.981 Run Summary: Type Total Ran Passed Failed Inactive 00:16:57.981 suites 6 6 n/a 0 0 00:16:57.981 tests 138 138 138 0 0 00:16:57.981 asserts 780 780 780 0 n/a 00:16:57.981 00:16:57.981 Elapsed time = 1.154 seconds 00:16:57.981 0 00:16:57.981 08:35:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71545 00:16:57.981 08:35:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 71545 ']' 00:16:57.981 08:35:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 71545 00:16:57.981 08:35:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:16:57.981 08:35:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.981 08:35:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71545 00:16:57.981 killing process with pid 71545 00:16:57.981 08:35:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.981 08:35:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.981 08:35:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71545' 00:16:57.981 08:35:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 71545 00:16:57.981 08:35:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 71545 00:16:59.354 08:35:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:59.354 00:16:59.354 real 0m2.643s 00:16:59.354 user 0m6.708s 00:16:59.354 sys 0m0.369s 00:16:59.354 08:35:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.354 ************************************ 00:16:59.354 END TEST bdev_bounds 00:16:59.354 ************************************ 00:16:59.354 08:35:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:59.354 08:35:38 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:16:59.354 08:35:38 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:59.354 08:35:38 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.354 08:35:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:59.354 ************************************ 00:16:59.354 START TEST bdev_nbd 00:16:59.354 ************************************ 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
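Annotation: the bdev_nbd stage being set up here exports each of the six xnvme bdevs as a kernel /dev/nbdX node over the dedicated /var/tmp/spdk-nbd.sock socket, verifies it, and tears it down again. A hypothetical condensation of that start/verify/stop cycle (the real helpers live in test/bdev/nbd_common.sh):

    # Hypothetical condensation of nbd_rpc_start_stop_verify; assumes bdev_svc
    # is already listening on /var/tmp/spdk-nbd.sock as started below.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for bdev in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
        dev=$($rpc nbd_start_disk "$bdev")                # kernel allocates the next free /dev/nbdX
        grep -q -w "${dev##*/}" /proc/partitions && echo "$bdev -> $dev"
    done
    $rpc nbd_get_disks                                    # JSON bdev<->nbd map, printed further down
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5; do
        $rpc nbd_stop_disk "$dev"                         # detach before the next test variant
    done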
00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71605 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71605 /var/tmp/spdk-nbd.sock 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 71605 ']' 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:59.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.354 08:35:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:59.354 [2024-11-19 08:35:38.365813] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:16:59.354 [2024-11-19 08:35:38.366261] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.354 [2024-11-19 08:35:38.547844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.612 [2024-11-19 08:35:38.651902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.178 08:35:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.178 08:35:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:17:00.178 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:17:00.178 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:00.178 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:00.178 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:00.178 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:17:00.178 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:00.178 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:00.178 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:00.178 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:00.178 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:00.178 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:00.178 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:00.178 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:00.436 
1+0 records in 00:17:00.436 1+0 records out 00:17:00.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576944 s, 7.1 MB/s 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:00.436 08:35:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:01.046 1+0 records in 00:17:01.046 1+0 records out 00:17:01.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000795303 s, 5.2 MB/s 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:01.046 08:35:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:17:01.331 08:35:40 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:01.331 1+0 records in 00:17:01.331 1+0 records out 00:17:01.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508898 s, 8.0 MB/s 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:01.331 08:35:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:01.590 1+0 records in 00:17:01.590 1+0 records out 00:17:01.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051308 s, 8.0 MB/s 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:01.590 08:35:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:01.849 1+0 records in 00:17:01.849 1+0 records out 00:17:01.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062132 s, 6.6 MB/s 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:01.849 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:17:02.415 08:35:41 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.415 1+0 records in 00:17:02.415 1+0 records out 00:17:02.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000801207 s, 5.1 MB/s 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:02.415 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:02.674 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:02.674 { 00:17:02.674 "nbd_device": "/dev/nbd0", 00:17:02.674 "bdev_name": "nvme0n1" 00:17:02.674 }, 00:17:02.674 { 00:17:02.674 "nbd_device": "/dev/nbd1", 00:17:02.674 "bdev_name": "nvme1n1" 00:17:02.674 }, 00:17:02.674 { 00:17:02.674 "nbd_device": "/dev/nbd2", 00:17:02.674 "bdev_name": "nvme2n1" 00:17:02.674 }, 00:17:02.674 { 00:17:02.674 "nbd_device": "/dev/nbd3", 00:17:02.674 "bdev_name": "nvme2n2" 00:17:02.674 }, 00:17:02.674 { 00:17:02.674 "nbd_device": "/dev/nbd4", 00:17:02.674 "bdev_name": "nvme2n3" 00:17:02.674 }, 00:17:02.674 { 00:17:02.674 "nbd_device": "/dev/nbd5", 00:17:02.674 "bdev_name": "nvme3n1" 00:17:02.674 } 00:17:02.674 ]' 00:17:02.674 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:02.674 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:02.674 { 00:17:02.674 "nbd_device": "/dev/nbd0", 00:17:02.674 "bdev_name": "nvme0n1" 00:17:02.674 }, 00:17:02.674 { 00:17:02.674 "nbd_device": "/dev/nbd1", 00:17:02.674 "bdev_name": "nvme1n1" 00:17:02.674 }, 00:17:02.674 { 00:17:02.674 "nbd_device": "/dev/nbd2", 00:17:02.674 "bdev_name": "nvme2n1" 00:17:02.674 }, 00:17:02.674 { 00:17:02.674 "nbd_device": "/dev/nbd3", 00:17:02.674 "bdev_name": "nvme2n2" 00:17:02.674 }, 00:17:02.674 { 00:17:02.674 "nbd_device": "/dev/nbd4", 00:17:02.674 "bdev_name": "nvme2n3" 00:17:02.674 }, 00:17:02.674 { 00:17:02.674 "nbd_device": "/dev/nbd5", 00:17:02.674 "bdev_name": "nvme3n1" 00:17:02.674 } 00:17:02.674 ]' 00:17:02.674 08:35:41 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:02.674 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:17:02.674 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:02.674 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:17:02.674 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:02.674 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:02.674 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:02.674 08:35:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:02.933 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:02.933 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:02.933 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:02.933 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:02.933 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:02.933 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:02.933 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:02.933 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:02.933 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:02.933 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:03.191 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:03.191 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:03.191 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:03.191 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.191 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.191 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:03.191 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:03.191 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.191 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.191 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:17:03.450 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:17:03.450 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:17:03.450 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:17:03.450 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.450 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.450 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:17:03.450 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:03.450 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.450 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.450 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:17:03.709 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:17:03.709 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:17:03.709 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:17:03.709 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.709 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.709 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:17:03.709 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:03.709 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.709 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.709 08:35:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:17:03.968 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:17:03.968 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:17:03.968 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:17:03.968 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.968 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.968 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:17:03.968 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:03.968 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.968 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.968 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:17:04.226 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:17:04.226 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:17:04.226 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:17:04.226 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.226 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.226 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:17:04.226 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:04.226 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.226 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:04.226 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:04.226 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:04.485 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:04.485 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:04.485 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:04.744 08:35:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:17:05.003 /dev/nbd0 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.003 1+0 records in 00:17:05.003 1+0 records out 00:17:05.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572917 s, 7.1 MB/s 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:05.003 08:35:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:17:05.262 /dev/nbd1 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.262 1+0 records in 00:17:05.262 1+0 records out 00:17:05.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655515 s, 6.2 MB/s 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:05.262 08:35:44 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:05.262 08:35:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:17:05.521 /dev/nbd10 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.521 1+0 records in 00:17:05.521 1+0 records out 00:17:05.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453938 s, 9.0 MB/s 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:05.521 08:35:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:17:05.779 /dev/nbd11 00:17:05.779 08:35:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:17:05.779 08:35:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:17:05.779 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:17:05.779 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:05.779 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:05.779 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:05.779 08:35:44 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:17:05.779 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:05.779 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:05.779 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:05.779 08:35:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.779 1+0 records in 00:17:05.779 1+0 records out 00:17:05.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568224 s, 7.2 MB/s 00:17:05.779 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.779 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:05.779 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.779 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:05.779 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:05.779 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.779 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:05.779 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:17:06.038 /dev/nbd12 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.038 1+0 records in 00:17:06.038 1+0 records out 00:17:06.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000812028 s, 5.0 MB/s 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:06.038 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:17:06.296 /dev/nbd13 00:17:06.296 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:17:06.296 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:17:06.296 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:17:06.296 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:06.296 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.297 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.297 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:17:06.297 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:06.297 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:06.297 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:06.297 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:06.555 1+0 records in 00:17:06.555 1+0 records out 00:17:06.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000687687 s, 6.0 MB/s 00:17:06.555 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.555 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:06.555 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:06.555 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:06.555 08:35:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:06.555 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:06.555 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:06.555 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:06.555 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:06.555 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:06.814 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:06.814 { 00:17:06.814 "nbd_device": "/dev/nbd0", 00:17:06.814 "bdev_name": "nvme0n1" 00:17:06.814 }, 00:17:06.814 { 00:17:06.814 "nbd_device": "/dev/nbd1", 00:17:06.814 "bdev_name": "nvme1n1" 00:17:06.814 }, 00:17:06.814 { 00:17:06.814 "nbd_device": "/dev/nbd10", 00:17:06.814 "bdev_name": "nvme2n1" 00:17:06.814 }, 00:17:06.814 { 00:17:06.814 "nbd_device": "/dev/nbd11", 00:17:06.814 "bdev_name": "nvme2n2" 00:17:06.814 }, 00:17:06.814 { 00:17:06.814 "nbd_device": "/dev/nbd12", 00:17:06.814 "bdev_name": "nvme2n3" 00:17:06.814 }, 00:17:06.814 { 00:17:06.814 "nbd_device": "/dev/nbd13", 00:17:06.814 "bdev_name": "nvme3n1" 00:17:06.814 } 00:17:06.814 ]' 00:17:06.814 08:35:45 
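Once all six bdevs are exported, the harness cross-checks the kernel's view against SPDK's: nbd_get_disks returns the JSON mapping traced here, and jq reduces it to a device list whose length must equal the number of started disks. A minimal equivalent, with the socket path and expected count taken from the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    nbd_disks_json=$($rpc nbd_get_disks)
    # Pull each .nbd_device field out of the JSON array.
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
    [ "$count" -eq 6 ]   # six bdevs were started above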
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:06.814 { 00:17:06.814 "nbd_device": "/dev/nbd0", 00:17:06.814 "bdev_name": "nvme0n1" 00:17:06.814 }, 00:17:06.814 { 00:17:06.814 "nbd_device": "/dev/nbd1", 00:17:06.814 "bdev_name": "nvme1n1" 00:17:06.814 }, 00:17:06.814 { 00:17:06.814 "nbd_device": "/dev/nbd10", 00:17:06.814 "bdev_name": "nvme2n1" 00:17:06.814 }, 00:17:06.814 { 00:17:06.814 "nbd_device": "/dev/nbd11", 00:17:06.814 "bdev_name": "nvme2n2" 00:17:06.814 }, 00:17:06.814 { 00:17:06.814 "nbd_device": "/dev/nbd12", 00:17:06.814 "bdev_name": "nvme2n3" 00:17:06.814 }, 00:17:06.814 { 00:17:06.814 "nbd_device": "/dev/nbd13", 00:17:06.814 "bdev_name": "nvme3n1" 00:17:06.815 } 00:17:06.815 ]' 00:17:06.815 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:06.815 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:06.815 /dev/nbd1 00:17:06.815 /dev/nbd10 00:17:06.815 /dev/nbd11 00:17:06.815 /dev/nbd12 00:17:06.815 /dev/nbd13' 00:17:06.815 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:06.815 /dev/nbd1 00:17:06.815 /dev/nbd10 00:17:06.815 /dev/nbd11 00:17:06.815 /dev/nbd12 00:17:06.815 /dev/nbd13' 00:17:06.815 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:06.815 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:17:06.815 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:17:06.815 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:17:06.815 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:17:06.815 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:17:06.815 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:06.815 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:06.815 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:06.815 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:06.815 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:06.815 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:06.815 256+0 records in 00:17:06.815 256+0 records out 00:17:06.815 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00725793 s, 144 MB/s 00:17:06.815 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:06.815 08:35:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:07.073 256+0 records in 00:17:07.073 256+0 records out 00:17:07.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134925 s, 7.8 MB/s 00:17:07.073 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:07.073 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:07.073 256+0 records in 00:17:07.073 256+0 records out 00:17:07.073 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.167986 s, 6.2 MB/s 00:17:07.073 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:07.073 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:17:07.354 256+0 records in 00:17:07.354 256+0 records out 00:17:07.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151029 s, 6.9 MB/s 00:17:07.354 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:07.354 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:17:07.354 256+0 records in 00:17:07.354 256+0 records out 00:17:07.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147642 s, 7.1 MB/s 00:17:07.354 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:07.354 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:17:07.622 256+0 records in 00:17:07.622 256+0 records out 00:17:07.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145375 s, 7.2 MB/s 00:17:07.622 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:07.622 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:17:07.622 256+0 records in 00:17:07.622 256+0 records out 00:17:07.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142963 s, 7.3 MB/s 00:17:07.622 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:17:07.622 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:07.622 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:07.622 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:07.622 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:07.622 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:07.622 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:07.622 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:07.622 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:07.622 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:07.622 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:17:07.881 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:07.881 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:17:07.881 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:07.881 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:17:07.881 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:07.881 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:17:07.881 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:07.881 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:17:07.881 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:07.881 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:07.881 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:07.881 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:07.881 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:07.881 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:07.881 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.881 08:35:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:08.140 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:08.140 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:08.140 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:08.140 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.140 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.140 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:08.140 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:08.140 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.140 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.140 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:08.398 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:08.398 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:08.398 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:08.398 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.398 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.398 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:08.399 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:08.399 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.399 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.399 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:17:08.656 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:17:08.656 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:17:08.656 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:17:08.656 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.656 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.657 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:17:08.657 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:08.657 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.657 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.657 08:35:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:17:08.915 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:17:08.915 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:17:08.915 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:17:08.915 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.915 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.915 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:17:08.915 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:08.915 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.915 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.915 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:17:09.173 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:17:09.173 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:17:09.173 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:17:09.173 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:09.173 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:09.173 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:17:09.173 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:09.173 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:09.173 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:09.173 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:17:09.431 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:17:09.431 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:17:09.431 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:17:09.431 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:09.431 08:35:48 
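The data-integrity pass that ran just before this teardown (nbd_common.sh@70-85 in the trace) is a plain write-then-compare: 1 MiB of urandom is pushed through each NBD device with O_DIRECT, then cmp reads each device back against the source file. A condensed sketch, with the paths, sizes, and flags exactly as traced:

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256        # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                        # byte-compare the first 1 MiB
    done
    rm "$tmp"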
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:09.431 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:17:09.431 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:09.431 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:09.431 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:09.431 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:09.431 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:09.690 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:09.690 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:09.690 08:35:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:09.949 08:35:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:09.949 08:35:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:09.949 08:35:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:09.949 08:35:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:09.949 08:35:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:09.949 08:35:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:09.949 08:35:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:09.949 08:35:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:09.949 08:35:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:09.949 08:35:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:09.949 08:35:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:09.949 08:35:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:09.949 08:35:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:10.207 malloc_lvol_verify 00:17:10.207 08:35:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:10.465 3cba4658-04f1-4c60-9b27-c9c9cfc8ae6d 00:17:10.465 08:35:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:10.723 c4ea4ec9-ab4d-4e1c-981d-3557e329f0b7 00:17:10.723 08:35:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:10.981 /dev/nbd0 00:17:10.981 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:10.981 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:10.981 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:10.981 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:10.981 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:17:10.981 mke2fs 1.47.0 (5-Feb-2023) 00:17:10.981 Discarding device blocks: 0/4096 done 00:17:10.981 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:10.981 00:17:10.981 Allocating group tables: 0/1 done 00:17:10.981 Writing inode tables: 0/1 done 00:17:10.981 Creating journal (1024 blocks): done 00:17:10.981 Writing superblocks and filesystem accounting information: 0/1 done 00:17:10.981 00:17:10.981 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:10.981 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:10.981 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:10.981 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:10.981 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:10.981 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:10.981 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71605 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 71605 ']' 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 71605 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71605 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:11.238 killing process with pid 71605 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71605' 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 71605 00:17:11.238 08:35:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 71605 00:17:12.612 ************************************ 00:17:12.612 END TEST bdev_nbd 00:17:12.612 ************************************ 00:17:12.612 08:35:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:12.612 00:17:12.612 real 0m13.243s 00:17:12.612 user 0m19.113s 00:17:12.612 sys 0m4.203s 00:17:12.612 08:35:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.612 
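The bdev_nbd test closes with nbd_with_lvol_verify, traced above: a malloc bdev becomes an lvstore, an lvol is carved from it and exported over NBD, and mkfs.ext4 succeeding on the result is the end-to-end proof that writes land. The RPC chain, condensed (subcommands, names, and sizes are exactly those in the trace):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol in lvstore "lvs"
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0        # the mke2fs output above comes from this step
    $rpc nbd_stop_disk /dev/nbd0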
08:35:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:12.612 08:35:51 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:12.612 08:35:51 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:17:12.612 08:35:51 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:17:12.612 08:35:51 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:17:12.612 08:35:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:12.612 08:35:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.612 08:35:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:12.612 ************************************ 00:17:12.612 START TEST bdev_fio 00:17:12.612 ************************************ 00:17:12.612 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:17:12.612 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:12.613 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:12.613 ************************************ 00:17:12.613 START TEST bdev_fio_rw_verify 00:17:12.613 ************************************ 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:12.613 08:35:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:12.872 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.872 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.872 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.872 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.872 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.872 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:12.872 fio-3.35 00:17:12.872 Starting 6 threads 00:17:25.079 00:17:25.079 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72036: Tue Nov 19 08:36:02 2024 00:17:25.079 read: IOPS=27.6k, BW=108MiB/s (113MB/s)(1079MiB/10001msec) 00:17:25.079 slat (usec): min=3, max=1729, avg= 7.00, stdev= 5.01 00:17:25.079 clat (usec): min=131, max=6504, avg=660.44, 
stdev=273.21 00:17:25.079 lat (usec): min=140, max=6515, avg=667.44, stdev=273.84 00:17:25.079 clat percentiles (usec): 00:17:25.079 | 50.000th=[ 660], 99.000th=[ 1287], 99.900th=[ 2900], 99.990th=[ 6325], 00:17:25.079 | 99.999th=[ 6456] 00:17:25.079 write: IOPS=28.0k, BW=110MiB/s (115MB/s)(1096MiB/10001msec); 0 zone resets 00:17:25.079 slat (usec): min=14, max=2597, avg=29.14, stdev=32.32 00:17:25.079 clat (usec): min=103, max=6905, avg=755.68, stdev=280.87 00:17:25.079 lat (usec): min=132, max=6927, avg=784.83, stdev=283.54 00:17:25.079 clat percentiles (usec): 00:17:25.079 | 50.000th=[ 758], 99.000th=[ 1467], 99.900th=[ 2966], 99.990th=[ 4555], 00:17:25.079 | 99.999th=[ 6521] 00:17:25.079 bw ( KiB/s): min=94895, max=142294, per=100.00%, avg=112428.11, stdev=2439.21, samples=114 00:17:25.079 iops : min=23723, max=35573, avg=28106.58, stdev=609.82, samples=114 00:17:25.079 lat (usec) : 250=2.52%, 500=20.44%, 750=33.78%, 1000=31.94% 00:17:25.079 lat (msec) : 2=11.06%, 4=0.22%, 10=0.03% 00:17:25.079 cpu : usr=59.25%, sys=26.73%, ctx=7573, majf=0, minf=23811 00:17:25.079 IO depths : 1=12.0%, 2=24.4%, 4=50.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:25.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.079 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.079 issued rwts: total=276330,280502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.079 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:25.079 00:17:25.079 Run status group 0 (all jobs): 00:17:25.079 READ: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=1079MiB (1132MB), run=10001-10001msec 00:17:25.079 WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=1096MiB (1149MB), run=10001-10001msec 00:17:25.079 ----------------------------------------------------- 00:17:25.079 Suppressions used: 00:17:25.079 count bytes template 00:17:25.079 6 48 /usr/src/fio/parse.c 00:17:25.079 3974 381504 /usr/src/fio/iolog.c 00:17:25.079 1 8 libtcmalloc_minimal.so 00:17:25.079 1 904 libcrypto.so 00:17:25.079 ----------------------------------------------------- 00:17:25.079 00:17:25.079 00:17:25.079 real 0m12.426s 00:17:25.079 user 0m37.481s 00:17:25.079 sys 0m16.401s 00:17:25.079 08:36:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.079 08:36:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:25.079 ************************************ 00:17:25.079 END TEST bdev_fio_rw_verify 00:17:25.079 ************************************ 00:17:25.079 08:36:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:25.079 08:36:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.079 08:36:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:25.079 08:36:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.079 08:36:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:17:25.079 08:36:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 
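The bdev_fio_rw_verify run above boils down to: append one [job_<bdev>] section per bdev to bdev.fio, locate libasan via ldd on the fio plugin so the instrumented plugin can load into the uninstrumented fio binary under LD_PRELOAD, then run fio with the spdk_bdev ioengine against the generated JSON config. A condensed sketch (flags copied from the trace; the generated config also carries verify options not shown here):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    for b in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
        printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$cfg"
    done
    # The ASan runtime must be preloaded ahead of the plugin itself.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 "$cfg" \
        --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output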
00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "21f71ed3-75ff-4976-b12b-9e967953593e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "21f71ed3-75ff-4976-b12b-9e967953593e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "af803469-f362-4183-8d53-b355208414db"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "af803469-f362-4183-8d53-b355208414db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "578cd6c4-dee6-412a-bd1c-28246ae5e604"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "578cd6c4-dee6-412a-bd1c-28246ae5e604",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": 
false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "c3172fb2-fc4c-4ed4-baae-19460302cd72"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c3172fb2-fc4c-4ed4-baae-19460302cd72",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a31bab33-c26d-4c18-b5ad-3e1363a887d8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a31bab33-c26d-4c18-b5ad-3e1363a887d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "28ac4e7d-60fe-4e6b-8262-64381b97d17a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "28ac4e7d-60fe-4e6b-8262-64381b97d17a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:25.080 /home/vagrant/spdk_repo/spdk 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:17:25.080 00:17:25.080 
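The jq filter above decides whether a trim workload is even possible: every xNVMe bdev in the dump reports "unmap": false, so the filter yields nothing, the [[ -n '' ]] guard fails, and the generated trim fio file is simply removed. Run standalone against a live target, an equivalent query looks roughly like this (rpc.py returns the bdevs as one JSON array, hence the leading .[]):

  # Names of bdevs that can service unmap (trim); empty output means
  # the trimwrite job has nothing to run against.
  scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'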
real 0m12.641s 00:17:25.080 user 0m37.592s 00:17:25.080 sys 0m16.493s 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.080 08:36:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:25.080 ************************************ 00:17:25.080 END TEST bdev_fio 00:17:25.080 ************************************ 00:17:25.080 08:36:04 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:25.080 08:36:04 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:25.080 08:36:04 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:25.080 08:36:04 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.080 08:36:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:25.080 ************************************ 00:17:25.080 START TEST bdev_verify 00:17:25.080 ************************************ 00:17:25.080 08:36:04 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:25.080 [2024-11-19 08:36:04.353596] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:25.080 [2024-11-19 08:36:04.353779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72213 ] 00:17:25.339 [2024-11-19 08:36:04.540205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:25.597 [2024-11-19 08:36:04.670399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.597 [2024-11-19 08:36:04.670404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.855 Running I/O for 5 seconds... 
00:17:28.167 21904.00 IOPS, 85.56 MiB/s [2024-11-19T08:36:08.840Z] 21856.00 IOPS, 85.38 MiB/s [2024-11-19T08:36:09.406Z] 22069.33 IOPS, 86.21 MiB/s [2024-11-19T08:36:10.340Z] 22080.00 IOPS, 86.25 MiB/s 00:17:31.044 Latency(us) 00:17:31.044 [2024-11-19T08:36:10.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.044 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.044 Verification LBA range: start 0x0 length 0xa0000 00:17:31.044 nvme0n1 : 5.03 1653.05 6.46 0.00 0.00 77270.70 12511.42 70063.94 00:17:31.044 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.044 Verification LBA range: start 0xa0000 length 0xa0000 00:17:31.044 nvme0n1 : 5.06 1592.38 6.22 0.00 0.00 80211.58 13107.20 73876.95 00:17:31.044 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.044 Verification LBA range: start 0x0 length 0xbd0bd 00:17:31.044 nvme1n1 : 5.05 2704.46 10.56 0.00 0.00 46994.97 3961.95 67680.81 00:17:31.044 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.044 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:31.044 nvme1n1 : 5.05 2619.08 10.23 0.00 0.00 48567.69 5064.15 64821.06 00:17:31.044 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.044 Verification LBA range: start 0x0 length 0x80000 00:17:31.044 nvme2n1 : 5.06 1670.73 6.53 0.00 0.00 76179.31 5749.29 73400.32 00:17:31.044 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.044 Verification LBA range: start 0x80000 length 0x80000 00:17:31.044 nvme2n1 : 5.07 1616.06 6.31 0.00 0.00 78533.19 8757.99 70540.57 00:17:31.044 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.044 Verification LBA range: start 0x0 length 0x80000 00:17:31.044 nvme2n2 : 5.04 1651.15 6.45 0.00 0.00 76950.01 15490.33 67204.19 00:17:31.044 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.044 Verification LBA range: start 0x80000 length 0x80000 00:17:31.044 nvme2n2 : 5.06 1595.21 6.23 0.00 0.00 79387.43 12034.79 81502.95 00:17:31.044 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.044 Verification LBA range: start 0x0 length 0x80000 00:17:31.044 nvme2n3 : 5.06 1668.75 6.52 0.00 0.00 76009.59 3589.59 61961.31 00:17:31.044 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.044 Verification LBA range: start 0x80000 length 0x80000 00:17:31.044 nvme2n3 : 5.06 1594.20 6.23 0.00 0.00 79311.17 10485.76 75306.82 00:17:31.044 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:31.044 Verification LBA range: start 0x0 length 0x20000 00:17:31.044 nvme3n1 : 5.06 1668.16 6.52 0.00 0.00 75886.39 4468.36 72923.69 00:17:31.044 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:31.044 Verification LBA range: start 0x20000 length 0x20000 00:17:31.044 nvme3n1 : 5.07 1614.55 6.31 0.00 0.00 78258.62 2338.44 84839.33 00:17:31.044 [2024-11-19T08:36:10.340Z] =================================================================================================================== 00:17:31.044 [2024-11-19T08:36:10.340Z] Total : 21647.77 84.56 0.00 0.00 70399.92 2338.44 84839.33 00:17:31.979 00:17:31.979 real 0m7.006s 00:17:31.979 user 0m11.152s 00:17:31.979 sys 0m1.660s 00:17:31.979 08:36:11 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.979 
************************************ 00:17:31.979 08:36:11 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:31.979 END TEST bdev_verify 00:17:31.979 ************************************ 00:17:32.237 08:36:11 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:32.237 08:36:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:32.237 08:36:11 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.237 08:36:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:32.237 ************************************ 00:17:32.237 START TEST bdev_verify_big_io 00:17:32.237 ************************************ 00:17:32.237 08:36:11 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:32.237 [2024-11-19 08:36:11.394633] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:32.237 [2024-11-19 08:36:11.394777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72318 ] 00:17:32.496 [2024-11-19 08:36:11.566669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:32.496 [2024-11-19 08:36:11.672638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.496 [2024-11-19 08:36:11.672653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.063 Running I/O for 5 seconds... 
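Both verify stages are the same bdevperf invocation with only the IO size changed: 4096-byte IOs for bdev_verify, 65536-byte IOs for bdev_verify_big_io. A sketch of the big-IO form, assuming the repo layout from the trace (-q is queue depth, -o IO size in bytes, -w the workload, -t run time in seconds, -m the core mask; -C is passed through exactly as traced):

  # Verify workload with 64 KiB IOs across two cores; bdevperf reads back
  # and checks the data it wrote.
  build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3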
00:17:38.878 670.00 IOPS, 41.88 MiB/s [2024-11-19T08:36:18.432Z] 2272.00 IOPS, 142.00 MiB/s [2024-11-19T08:36:18.432Z] 2792.00 IOPS, 174.50 MiB/s 00:17:39.136 Latency(us) 00:17:39.136 [2024-11-19T08:36:18.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.137 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.137 Verification LBA range: start 0x0 length 0xa000 00:17:39.137 nvme0n1 : 6.01 114.46 7.15 0.00 0.00 1079251.14 129642.12 1647217.57 00:17:39.137 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.137 Verification LBA range: start 0xa000 length 0xa000 00:17:39.137 nvme0n1 : 5.97 131.40 8.21 0.00 0.00 910767.41 91988.71 1029510.98 00:17:39.137 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.137 Verification LBA range: start 0x0 length 0xbd0b 00:17:39.137 nvme1n1 : 6.01 161.93 10.12 0.00 0.00 741219.44 9115.46 781665.75 00:17:39.137 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.137 Verification LBA range: start 0xbd0b length 0xbd0b 00:17:39.137 nvme1n1 : 5.97 136.64 8.54 0.00 0.00 862756.91 7685.59 1898875.81 00:17:39.137 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.137 Verification LBA range: start 0x0 length 0x8000 00:17:39.137 nvme2n1 : 6.06 113.57 7.10 0.00 0.00 1035506.90 94848.47 1448941.38 00:17:39.137 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.137 Verification LBA range: start 0x8000 length 0x8000 00:17:39.137 nvme2n1 : 5.99 130.79 8.17 0.00 0.00 895300.34 20137.43 1159153.11 00:17:39.137 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.137 Verification LBA range: start 0x0 length 0x8000 00:17:39.137 nvme2n2 : 6.02 128.86 8.05 0.00 0.00 872125.08 138221.38 1006632.96 00:17:39.137 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.137 Verification LBA range: start 0x8000 length 0x8000 00:17:39.137 nvme2n2 : 6.00 117.26 7.33 0.00 0.00 978841.76 3068.28 2181038.08 00:17:39.137 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.137 Verification LBA range: start 0x0 length 0x8000 00:17:39.137 nvme2n3 : 6.04 82.18 5.14 0.00 0.00 1340431.25 65774.31 3614727.45 00:17:39.137 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.137 Verification LBA range: start 0x8000 length 0x8000 00:17:39.137 nvme2n3 : 5.98 104.33 6.52 0.00 0.00 1057342.78 78166.57 2516582.40 00:17:39.137 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:39.137 Verification LBA range: start 0x0 length 0x2000 00:17:39.137 nvme3n1 : 6.05 124.37 7.77 0.00 0.00 860897.20 16562.73 2303054.20 00:17:39.137 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:39.137 Verification LBA range: start 0x2000 length 0x2000 00:17:39.137 nvme3n1 : 6.00 114.69 7.17 0.00 0.00 939269.35 10009.13 1830241.75 00:17:39.137 [2024-11-19T08:36:18.433Z] =================================================================================================================== 00:17:39.137 [2024-11-19T08:36:18.433Z] Total : 1460.48 91.28 0.00 0.00 943810.06 3068.28 3614727.45 00:17:40.522 00:17:40.522 real 0m8.294s 00:17:40.522 user 0m15.143s 00:17:40.522 sys 0m0.507s 00:17:40.522 08:36:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:40.522 08:36:19 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:40.522 ************************************ 00:17:40.522 END TEST bdev_verify_big_io 00:17:40.522 ************************************ 00:17:40.522 08:36:19 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:40.522 08:36:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:40.522 08:36:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.522 08:36:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:40.522 ************************************ 00:17:40.522 START TEST bdev_write_zeroes 00:17:40.522 ************************************ 00:17:40.522 08:36:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:40.522 [2024-11-19 08:36:19.753725] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:40.522 [2024-11-19 08:36:19.753905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72430 ] 00:17:40.781 [2024-11-19 08:36:19.933342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.781 [2024-11-19 08:36:20.058690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.349 Running I/O for 1 seconds... 
00:17:42.304 60655.00 IOPS, 236.93 MiB/s 00:17:42.304 Latency(us) 00:17:42.304 [2024-11-19T08:36:21.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.304 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.304 nvme0n1 : 1.04 9116.70 35.61 0.00 0.00 14024.28 8638.84 28597.53 00:17:42.304 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.304 nvme1n1 : 1.03 13865.81 54.16 0.00 0.00 9212.45 3232.12 17635.14 00:17:42.304 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.304 nvme2n1 : 1.03 9158.48 35.78 0.00 0.00 13867.57 8281.37 30980.65 00:17:42.304 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.304 nvme2n2 : 1.04 9145.13 35.72 0.00 0.00 13865.89 8519.68 30980.65 00:17:42.304 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.304 nvme2n3 : 1.04 9131.98 35.67 0.00 0.00 13862.92 8340.95 31218.97 00:17:42.304 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:42.304 nvme3n1 : 1.04 9225.93 36.04 0.00 0.00 13708.23 6762.12 31457.28 00:17:42.304 [2024-11-19T08:36:21.600Z] =================================================================================================================== 00:17:42.304 [2024-11-19T08:36:21.600Z] Total : 59644.03 232.98 0.00 0.00 12787.40 3232.12 31457.28 00:17:43.678 00:17:43.678 real 0m2.891s 00:17:43.678 user 0m2.185s 00:17:43.678 sys 0m0.522s 00:17:43.678 08:36:22 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.678 08:36:22 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:43.678 ************************************ 00:17:43.678 END TEST bdev_write_zeroes 00:17:43.678 ************************************ 00:17:43.678 08:36:22 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:43.678 08:36:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:43.678 08:36:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.678 08:36:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:43.678 ************************************ 00:17:43.678 START TEST bdev_json_nonenclosed 00:17:43.678 ************************************ 00:17:43.678 08:36:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:43.678 [2024-11-19 08:36:22.699232] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:17:43.678 [2024-11-19 08:36:22.699402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72480 ] 00:17:43.678 [2024-11-19 08:36:22.887868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.936 [2024-11-19 08:36:23.013661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.936 [2024-11-19 08:36:23.013780] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:43.936 [2024-11-19 08:36:23.013813] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:43.936 [2024-11-19 08:36:23.013830] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:44.195 00:17:44.195 real 0m0.693s 00:17:44.195 user 0m0.458s 00:17:44.195 sys 0m0.130s 00:17:44.195 08:36:23 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.195 08:36:23 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:44.195 ************************************ 00:17:44.195 END TEST bdev_json_nonenclosed 00:17:44.195 ************************************ 00:17:44.195 08:36:23 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:44.195 08:36:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:17:44.195 08:36:23 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.195 08:36:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:44.195 ************************************ 00:17:44.195 START TEST bdev_json_nonarray 00:17:44.195 ************************************ 00:17:44.195 08:36:23 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:44.195 [2024-11-19 08:36:23.428918] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:44.195 [2024-11-19 08:36:23.429068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72511 ] 00:17:44.453 [2024-11-19 08:36:23.613035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.453 [2024-11-19 08:36:23.739683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.453 [2024-11-19 08:36:23.739818] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
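The two negative tests here probe the JSON config loader, which accepts only a top-level object whose "subsystems" member is an array: nonenclosed.json drops the enclosing braces and nonarray.json makes "subsystems" a non-array, and each run must fail with exactly the error traced. The minimal valid shape — the same one visible in the full save_config dump later in this log — written to an illustrative /tmp path:

  # Smallest config the loader accepts: an object enclosing a
  # "subsystems" array of { subsystem, config } entries.
  cat > /tmp/minimal.json <<'EOF'
  {
    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]
  }
  EOF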
00:17:44.453 [2024-11-19 08:36:23.739853] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:44.453 [2024-11-19 08:36:23.739870] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:45.020 00:17:45.020 real 0m0.682s 00:17:45.020 user 0m0.445s 00:17:45.020 sys 0m0.130s 00:17:45.020 08:36:24 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:45.020 08:36:24 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:45.020 ************************************ 00:17:45.020 END TEST bdev_json_nonarray 00:17:45.020 ************************************ 00:17:45.020 08:36:24 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:17:45.020 08:36:24 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:17:45.020 08:36:24 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:17:45.020 08:36:24 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:45.020 08:36:24 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:17:45.020 08:36:24 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:45.020 08:36:24 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:45.020 08:36:24 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:17:45.020 08:36:24 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:17:45.020 08:36:24 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:17:45.020 08:36:24 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:17:45.020 08:36:24 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:45.278 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:49.463 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:49.463 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:49.463 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:49.463 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:49.463 00:17:49.463 real 1m4.478s 00:17:49.463 user 1m45.071s 00:17:49.463 sys 0m32.607s 00:17:49.463 08:36:28 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.463 08:36:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:49.463 ************************************ 00:17:49.463 END TEST blockdev_xnvme 00:17:49.463 ************************************ 00:17:49.463 08:36:28 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:49.463 08:36:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:49.463 08:36:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.463 08:36:28 -- common/autotest_common.sh@10 -- # set +x 00:17:49.463 ************************************ 00:17:49.463 START TEST ublk 00:17:49.463 ************************************ 00:17:49.463 08:36:28 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:17:49.463 * Looking for test storage... 
00:17:49.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:49.463 08:36:28 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:49.463 08:36:28 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:17:49.463 08:36:28 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:49.463 08:36:28 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:49.463 08:36:28 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.463 08:36:28 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.463 08:36:28 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.463 08:36:28 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.463 08:36:28 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.463 08:36:28 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.463 08:36:28 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.463 08:36:28 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.463 08:36:28 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.463 08:36:28 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.463 08:36:28 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.463 08:36:28 ublk -- scripts/common.sh@344 -- # case "$op" in 00:17:49.463 08:36:28 ublk -- scripts/common.sh@345 -- # : 1 00:17:49.463 08:36:28 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.463 08:36:28 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:49.463 08:36:28 ublk -- scripts/common.sh@365 -- # decimal 1 00:17:49.463 08:36:28 ublk -- scripts/common.sh@353 -- # local d=1 00:17:49.463 08:36:28 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.463 08:36:28 ublk -- scripts/common.sh@355 -- # echo 1 00:17:49.463 08:36:28 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.463 08:36:28 ublk -- scripts/common.sh@366 -- # decimal 2 00:17:49.463 08:36:28 ublk -- scripts/common.sh@353 -- # local d=2 00:17:49.463 08:36:28 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.463 08:36:28 ublk -- scripts/common.sh@355 -- # echo 2 00:17:49.463 08:36:28 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.463 08:36:28 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.463 08:36:28 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.463 08:36:28 ublk -- scripts/common.sh@368 -- # return 0 00:17:49.463 08:36:28 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.463 08:36:28 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:49.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.463 --rc genhtml_branch_coverage=1 00:17:49.463 --rc genhtml_function_coverage=1 00:17:49.463 --rc genhtml_legend=1 00:17:49.463 --rc geninfo_all_blocks=1 00:17:49.463 --rc geninfo_unexecuted_blocks=1 00:17:49.463 00:17:49.463 ' 00:17:49.463 08:36:28 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:49.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.463 --rc genhtml_branch_coverage=1 00:17:49.463 --rc genhtml_function_coverage=1 00:17:49.463 --rc genhtml_legend=1 00:17:49.463 --rc geninfo_all_blocks=1 00:17:49.463 --rc geninfo_unexecuted_blocks=1 00:17:49.463 00:17:49.463 ' 00:17:49.463 08:36:28 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:49.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.463 --rc genhtml_branch_coverage=1 00:17:49.463 --rc 
genhtml_function_coverage=1 00:17:49.463 --rc genhtml_legend=1 00:17:49.463 --rc geninfo_all_blocks=1 00:17:49.463 --rc geninfo_unexecuted_blocks=1 00:17:49.463 00:17:49.463 ' 00:17:49.463 08:36:28 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:49.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.463 --rc genhtml_branch_coverage=1 00:17:49.463 --rc genhtml_function_coverage=1 00:17:49.463 --rc genhtml_legend=1 00:17:49.463 --rc geninfo_all_blocks=1 00:17:49.463 --rc geninfo_unexecuted_blocks=1 00:17:49.463 00:17:49.463 ' 00:17:49.463 08:36:28 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:49.463 08:36:28 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:49.463 08:36:28 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:49.463 08:36:28 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:49.463 08:36:28 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:49.463 08:36:28 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:49.463 08:36:28 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:49.463 08:36:28 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:49.463 08:36:28 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:17:49.463 08:36:28 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:17:49.463 08:36:28 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:17:49.463 08:36:28 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:17:49.463 08:36:28 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:17:49.463 08:36:28 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:17:49.463 08:36:28 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:17:49.463 08:36:28 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:17:49.463 08:36:28 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:17:49.463 08:36:28 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:17:49.463 08:36:28 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:17:49.463 08:36:28 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:17:49.463 08:36:28 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:49.463 08:36:28 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.463 08:36:28 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:49.464 ************************************ 00:17:49.464 START TEST test_save_ublk_config 00:17:49.464 ************************************ 00:17:49.722 08:36:28 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:17:49.722 08:36:28 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:17:49.722 08:36:28 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=72805 00:17:49.722 08:36:28 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:17:49.722 08:36:28 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 72805 00:17:49.722 08:36:28 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:17:49.722 08:36:28 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 72805 ']' 00:17:49.722 08:36:28 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.722 08:36:28 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
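test_save_ublk_config, starting here, is a save/restore round-trip: the first spdk_tgt gets a ublk target plus a malloc-backed /dev/ublkb0 over RPC, its configuration is captured with save_config, and a second target (launched below with -c /dev/fd/63) is booted from that snapshot and must expose the same disk. Condensed to its RPC skeleton, with an illustrative /tmp path standing in for the /dev/fd/63 plumbing the harness actually uses:

  # Capture the live config, relaunch from it, and check the disk came back.
  scripts/rpc.py save_config > /tmp/ublk.json
  build/bin/spdk_tgt -L ublk -c /tmp/ublk.json &
  # (wait for the RPC socket before querying)
  scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # expect /dev/ublkb0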
00:17:49.722 08:36:28 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.722 08:36:28 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.722 08:36:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:49.722 [2024-11-19 08:36:28.861224] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:17:49.722 [2024-11-19 08:36:28.861375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72805 ] 00:17:49.980 [2024-11-19 08:36:29.042264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.980 [2024-11-19 08:36:29.177365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.919 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.919 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:50.919 08:36:30 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:17:50.919 08:36:30 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:17:50.919 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.919 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:50.919 [2024-11-19 08:36:30.022647] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:50.919 [2024-11-19 08:36:30.023838] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:50.919 malloc0 00:17:50.919 [2024-11-19 08:36:30.095919] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:50.919 [2024-11-19 08:36:30.096020] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:50.919 [2024-11-19 08:36:30.096037] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:50.919 [2024-11-19 08:36:30.096047] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:50.919 [2024-11-19 08:36:30.100832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:50.919 [2024-11-19 08:36:30.100866] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:50.919 [2024-11-19 08:36:30.102535] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:50.919 [2024-11-19 08:36:30.102681] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:50.919 [2024-11-19 08:36:30.122727] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:50.919 0 00:17:50.919 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.919 08:36:30 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:17:50.920 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.920 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:51.179 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.179 08:36:30 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:17:51.179 
"subsystems": [ 00:17:51.179 { 00:17:51.179 "subsystem": "fsdev", 00:17:51.179 "config": [ 00:17:51.179 { 00:17:51.179 "method": "fsdev_set_opts", 00:17:51.179 "params": { 00:17:51.179 "fsdev_io_pool_size": 65535, 00:17:51.179 "fsdev_io_cache_size": 256 00:17:51.179 } 00:17:51.179 } 00:17:51.179 ] 00:17:51.179 }, 00:17:51.179 { 00:17:51.179 "subsystem": "keyring", 00:17:51.179 "config": [] 00:17:51.179 }, 00:17:51.179 { 00:17:51.179 "subsystem": "iobuf", 00:17:51.179 "config": [ 00:17:51.179 { 00:17:51.179 "method": "iobuf_set_options", 00:17:51.179 "params": { 00:17:51.179 "small_pool_count": 8192, 00:17:51.179 "large_pool_count": 1024, 00:17:51.179 "small_bufsize": 8192, 00:17:51.179 "large_bufsize": 135168, 00:17:51.179 "enable_numa": false 00:17:51.179 } 00:17:51.179 } 00:17:51.179 ] 00:17:51.179 }, 00:17:51.179 { 00:17:51.179 "subsystem": "sock", 00:17:51.179 "config": [ 00:17:51.179 { 00:17:51.179 "method": "sock_set_default_impl", 00:17:51.179 "params": { 00:17:51.179 "impl_name": "posix" 00:17:51.179 } 00:17:51.179 }, 00:17:51.179 { 00:17:51.179 "method": "sock_impl_set_options", 00:17:51.179 "params": { 00:17:51.179 "impl_name": "ssl", 00:17:51.179 "recv_buf_size": 4096, 00:17:51.179 "send_buf_size": 4096, 00:17:51.179 "enable_recv_pipe": true, 00:17:51.179 "enable_quickack": false, 00:17:51.179 "enable_placement_id": 0, 00:17:51.179 "enable_zerocopy_send_server": true, 00:17:51.179 "enable_zerocopy_send_client": false, 00:17:51.179 "zerocopy_threshold": 0, 00:17:51.179 "tls_version": 0, 00:17:51.179 "enable_ktls": false 00:17:51.179 } 00:17:51.179 }, 00:17:51.179 { 00:17:51.179 "method": "sock_impl_set_options", 00:17:51.179 "params": { 00:17:51.179 "impl_name": "posix", 00:17:51.179 "recv_buf_size": 2097152, 00:17:51.179 "send_buf_size": 2097152, 00:17:51.179 "enable_recv_pipe": true, 00:17:51.179 "enable_quickack": false, 00:17:51.179 "enable_placement_id": 0, 00:17:51.179 "enable_zerocopy_send_server": true, 00:17:51.179 "enable_zerocopy_send_client": false, 00:17:51.179 "zerocopy_threshold": 0, 00:17:51.179 "tls_version": 0, 00:17:51.179 "enable_ktls": false 00:17:51.179 } 00:17:51.179 } 00:17:51.179 ] 00:17:51.179 }, 00:17:51.179 { 00:17:51.179 "subsystem": "vmd", 00:17:51.179 "config": [] 00:17:51.179 }, 00:17:51.179 { 00:17:51.179 "subsystem": "accel", 00:17:51.179 "config": [ 00:17:51.179 { 00:17:51.179 "method": "accel_set_options", 00:17:51.179 "params": { 00:17:51.179 "small_cache_size": 128, 00:17:51.179 "large_cache_size": 16, 00:17:51.179 "task_count": 2048, 00:17:51.179 "sequence_count": 2048, 00:17:51.179 "buf_count": 2048 00:17:51.179 } 00:17:51.179 } 00:17:51.179 ] 00:17:51.179 }, 00:17:51.179 { 00:17:51.179 "subsystem": "bdev", 00:17:51.179 "config": [ 00:17:51.179 { 00:17:51.179 "method": "bdev_set_options", 00:17:51.179 "params": { 00:17:51.179 "bdev_io_pool_size": 65535, 00:17:51.179 "bdev_io_cache_size": 256, 00:17:51.179 "bdev_auto_examine": true, 00:17:51.179 "iobuf_small_cache_size": 128, 00:17:51.179 "iobuf_large_cache_size": 16 00:17:51.179 } 00:17:51.179 }, 00:17:51.179 { 00:17:51.179 "method": "bdev_raid_set_options", 00:17:51.179 "params": { 00:17:51.179 "process_window_size_kb": 1024, 00:17:51.179 "process_max_bandwidth_mb_sec": 0 00:17:51.179 } 00:17:51.179 }, 00:17:51.179 { 00:17:51.179 "method": "bdev_iscsi_set_options", 00:17:51.179 "params": { 00:17:51.179 "timeout_sec": 30 00:17:51.179 } 00:17:51.179 }, 00:17:51.179 { 00:17:51.179 "method": "bdev_nvme_set_options", 00:17:51.179 "params": { 00:17:51.179 "action_on_timeout": "none", 
00:17:51.179 "timeout_us": 0, 00:17:51.179 "timeout_admin_us": 0, 00:17:51.179 "keep_alive_timeout_ms": 10000, 00:17:51.179 "arbitration_burst": 0, 00:17:51.179 "low_priority_weight": 0, 00:17:51.179 "medium_priority_weight": 0, 00:17:51.179 "high_priority_weight": 0, 00:17:51.179 "nvme_adminq_poll_period_us": 10000, 00:17:51.179 "nvme_ioq_poll_period_us": 0, 00:17:51.179 "io_queue_requests": 0, 00:17:51.179 "delay_cmd_submit": true, 00:17:51.179 "transport_retry_count": 4, 00:17:51.179 "bdev_retry_count": 3, 00:17:51.179 "transport_ack_timeout": 0, 00:17:51.179 "ctrlr_loss_timeout_sec": 0, 00:17:51.179 "reconnect_delay_sec": 0, 00:17:51.179 "fast_io_fail_timeout_sec": 0, 00:17:51.179 "disable_auto_failback": false, 00:17:51.179 "generate_uuids": false, 00:17:51.179 "transport_tos": 0, 00:17:51.179 "nvme_error_stat": false, 00:17:51.179 "rdma_srq_size": 0, 00:17:51.179 "io_path_stat": false, 00:17:51.179 "allow_accel_sequence": false, 00:17:51.179 "rdma_max_cq_size": 0, 00:17:51.179 "rdma_cm_event_timeout_ms": 0, 00:17:51.179 "dhchap_digests": [ 00:17:51.179 "sha256", 00:17:51.179 "sha384", 00:17:51.179 "sha512" 00:17:51.179 ], 00:17:51.179 "dhchap_dhgroups": [ 00:17:51.179 "null", 00:17:51.179 "ffdhe2048", 00:17:51.179 "ffdhe3072", 00:17:51.179 "ffdhe4096", 00:17:51.179 "ffdhe6144", 00:17:51.179 "ffdhe8192" 00:17:51.179 ] 00:17:51.179 } 00:17:51.179 }, 00:17:51.179 { 00:17:51.179 "method": "bdev_nvme_set_hotplug", 00:17:51.179 "params": { 00:17:51.179 "period_us": 100000, 00:17:51.179 "enable": false 00:17:51.179 } 00:17:51.179 }, 00:17:51.179 { 00:17:51.179 "method": "bdev_malloc_create", 00:17:51.179 "params": { 00:17:51.179 "name": "malloc0", 00:17:51.179 "num_blocks": 8192, 00:17:51.179 "block_size": 4096, 00:17:51.179 "physical_block_size": 4096, 00:17:51.179 "uuid": "8fc03ff7-96ed-4358-b0b3-7d3565f1f0c4", 00:17:51.179 "optimal_io_boundary": 0, 00:17:51.179 "md_size": 0, 00:17:51.179 "dif_type": 0, 00:17:51.180 "dif_is_head_of_md": false, 00:17:51.180 "dif_pi_format": 0 00:17:51.180 } 00:17:51.180 }, 00:17:51.180 { 00:17:51.180 "method": "bdev_wait_for_examine" 00:17:51.180 } 00:17:51.180 ] 00:17:51.180 }, 00:17:51.180 { 00:17:51.180 "subsystem": "scsi", 00:17:51.180 "config": null 00:17:51.180 }, 00:17:51.180 { 00:17:51.180 "subsystem": "scheduler", 00:17:51.180 "config": [ 00:17:51.180 { 00:17:51.180 "method": "framework_set_scheduler", 00:17:51.180 "params": { 00:17:51.180 "name": "static" 00:17:51.180 } 00:17:51.180 } 00:17:51.180 ] 00:17:51.180 }, 00:17:51.180 { 00:17:51.180 "subsystem": "vhost_scsi", 00:17:51.180 "config": [] 00:17:51.180 }, 00:17:51.180 { 00:17:51.180 "subsystem": "vhost_blk", 00:17:51.180 "config": [] 00:17:51.180 }, 00:17:51.180 { 00:17:51.180 "subsystem": "ublk", 00:17:51.180 "config": [ 00:17:51.180 { 00:17:51.180 "method": "ublk_create_target", 00:17:51.180 "params": { 00:17:51.180 "cpumask": "1" 00:17:51.180 } 00:17:51.180 }, 00:17:51.180 { 00:17:51.180 "method": "ublk_start_disk", 00:17:51.180 "params": { 00:17:51.180 "bdev_name": "malloc0", 00:17:51.180 "ublk_id": 0, 00:17:51.180 "num_queues": 1, 00:17:51.180 "queue_depth": 128 00:17:51.180 } 00:17:51.180 } 00:17:51.180 ] 00:17:51.180 }, 00:17:51.180 { 00:17:51.180 "subsystem": "nbd", 00:17:51.180 "config": [] 00:17:51.180 }, 00:17:51.180 { 00:17:51.180 "subsystem": "nvmf", 00:17:51.180 "config": [ 00:17:51.180 { 00:17:51.180 "method": "nvmf_set_config", 00:17:51.180 "params": { 00:17:51.180 "discovery_filter": "match_any", 00:17:51.180 "admin_cmd_passthru": { 00:17:51.180 "identify_ctrlr": false 
00:17:51.180 }, 00:17:51.180 "dhchap_digests": [ 00:17:51.180 "sha256", 00:17:51.180 "sha384", 00:17:51.180 "sha512" 00:17:51.180 ], 00:17:51.180 "dhchap_dhgroups": [ 00:17:51.180 "null", 00:17:51.180 "ffdhe2048", 00:17:51.180 "ffdhe3072", 00:17:51.180 "ffdhe4096", 00:17:51.180 "ffdhe6144", 00:17:51.180 "ffdhe8192" 00:17:51.180 ] 00:17:51.180 } 00:17:51.180 }, 00:17:51.180 { 00:17:51.180 "method": "nvmf_set_max_subsystems", 00:17:51.180 "params": { 00:17:51.180 "max_subsystems": 1024 00:17:51.180 } 00:17:51.180 }, 00:17:51.180 { 00:17:51.180 "method": "nvmf_set_crdt", 00:17:51.180 "params": { 00:17:51.180 "crdt1": 0, 00:17:51.180 "crdt2": 0, 00:17:51.180 "crdt3": 0 00:17:51.180 } 00:17:51.180 } 00:17:51.180 ] 00:17:51.180 }, 00:17:51.180 { 00:17:51.180 "subsystem": "iscsi", 00:17:51.180 "config": [ 00:17:51.180 { 00:17:51.180 "method": "iscsi_set_options", 00:17:51.180 "params": { 00:17:51.180 "node_base": "iqn.2016-06.io.spdk", 00:17:51.180 "max_sessions": 128, 00:17:51.180 "max_connections_per_session": 2, 00:17:51.180 "max_queue_depth": 64, 00:17:51.180 "default_time2wait": 2, 00:17:51.180 "default_time2retain": 20, 00:17:51.180 "first_burst_length": 8192, 00:17:51.180 "immediate_data": true, 00:17:51.180 "allow_duplicated_isid": false, 00:17:51.180 "error_recovery_level": 0, 00:17:51.180 "nop_timeout": 60, 00:17:51.180 "nop_in_interval": 30, 00:17:51.180 "disable_chap": false, 00:17:51.180 "require_chap": false, 00:17:51.180 "mutual_chap": false, 00:17:51.180 "chap_group": 0, 00:17:51.180 "max_large_datain_per_connection": 64, 00:17:51.180 "max_r2t_per_connection": 4, 00:17:51.180 "pdu_pool_size": 36864, 00:17:51.180 "immediate_data_pool_size": 16384, 00:17:51.180 "data_out_pool_size": 2048 00:17:51.180 } 00:17:51.180 } 00:17:51.180 ] 00:17:51.180 } 00:17:51.180 ] 00:17:51.180 }' 00:17:51.180 08:36:30 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 72805 00:17:51.180 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 72805 ']' 00:17:51.180 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 72805 00:17:51.180 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:51.180 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.180 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72805 00:17:51.180 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.180 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.180 killing process with pid 72805 00:17:51.180 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72805' 00:17:51.180 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 72805 00:17:51.180 08:36:30 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 72805 00:17:53.081 [2024-11-19 08:36:32.008746] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:53.081 [2024-11-19 08:36:32.052663] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:53.081 [2024-11-19 08:36:32.052845] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:53.081 [2024-11-19 08:36:32.062670] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:53.081 [2024-11-19 
08:36:32.062732] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:53.081 [2024-11-19 08:36:32.062752] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:53.081 [2024-11-19 08:36:32.062783] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:53.081 [2024-11-19 08:36:32.062970] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:54.982 08:36:33 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=72872 00:17:54.982 08:36:33 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 72872 00:17:54.982 08:36:33 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 72872 ']' 00:17:54.982 08:36:33 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.982 08:36:33 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.982 08:36:33 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.982 08:36:33 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:17:54.982 "subsystems": [ 00:17:54.982 { 00:17:54.982 "subsystem": "fsdev", 00:17:54.982 "config": [ 00:17:54.982 { 00:17:54.982 "method": "fsdev_set_opts", 00:17:54.982 "params": { 00:17:54.982 "fsdev_io_pool_size": 65535, 00:17:54.982 "fsdev_io_cache_size": 256 00:17:54.982 } 00:17:54.982 } 00:17:54.982 ] 00:17:54.982 }, 00:17:54.982 { 00:17:54.982 "subsystem": "keyring", 00:17:54.982 "config": [] 00:17:54.982 }, 00:17:54.982 { 00:17:54.982 "subsystem": "iobuf", 00:17:54.982 "config": [ 00:17:54.982 { 00:17:54.982 "method": "iobuf_set_options", 00:17:54.982 "params": { 00:17:54.982 "small_pool_count": 8192, 00:17:54.982 "large_pool_count": 1024, 00:17:54.982 "small_bufsize": 8192, 00:17:54.982 "large_bufsize": 135168, 00:17:54.982 "enable_numa": false 00:17:54.982 } 00:17:54.982 } 00:17:54.982 ] 00:17:54.982 }, 00:17:54.982 { 00:17:54.982 "subsystem": "sock", 00:17:54.982 "config": [ 00:17:54.982 { 00:17:54.982 "method": "sock_set_default_impl", 00:17:54.982 "params": { 00:17:54.982 "impl_name": "posix" 00:17:54.982 } 00:17:54.982 }, 00:17:54.982 { 00:17:54.982 "method": "sock_impl_set_options", 00:17:54.982 "params": { 00:17:54.982 "impl_name": "ssl", 00:17:54.982 "recv_buf_size": 4096, 00:17:54.982 "send_buf_size": 4096, 00:17:54.982 "enable_recv_pipe": true, 00:17:54.982 "enable_quickack": false, 00:17:54.982 "enable_placement_id": 0, 00:17:54.982 "enable_zerocopy_send_server": true, 00:17:54.982 "enable_zerocopy_send_client": false, 00:17:54.982 "zerocopy_threshold": 0, 00:17:54.982 "tls_version": 0, 00:17:54.982 "enable_ktls": false 00:17:54.982 } 00:17:54.982 }, 00:17:54.982 { 00:17:54.982 "method": "sock_impl_set_options", 00:17:54.982 "params": { 00:17:54.982 "impl_name": "posix", 00:17:54.982 "recv_buf_size": 2097152, 00:17:54.982 "send_buf_size": 2097152, 00:17:54.982 "enable_recv_pipe": true, 00:17:54.982 "enable_quickack": false, 00:17:54.982 "enable_placement_id": 0, 00:17:54.982 "enable_zerocopy_send_server": true, 00:17:54.982 "enable_zerocopy_send_client": false, 00:17:54.982 "zerocopy_threshold": 0, 00:17:54.982 "tls_version": 0, 00:17:54.982 "enable_ktls": false 00:17:54.982 } 00:17:54.982 } 00:17:54.982 ] 00:17:54.982 }, 00:17:54.982 { 00:17:54.982 "subsystem": "vmd", 00:17:54.982 "config": [] 00:17:54.982 }, 00:17:54.982 { 00:17:54.982 "subsystem": 
"accel", 00:17:54.982 "config": [ 00:17:54.982 { 00:17:54.982 "method": "accel_set_options", 00:17:54.982 "params": { 00:17:54.982 "small_cache_size": 128, 00:17:54.982 "large_cache_size": 16, 00:17:54.982 "task_count": 2048, 00:17:54.982 "sequence_count": 2048, 00:17:54.982 "buf_count": 2048 00:17:54.982 } 00:17:54.982 } 00:17:54.982 ] 00:17:54.982 }, 00:17:54.982 { 00:17:54.982 "subsystem": "bdev", 00:17:54.982 "config": [ 00:17:54.982 { 00:17:54.982 "method": "bdev_set_options", 00:17:54.982 "params": { 00:17:54.982 "bdev_io_pool_size": 65535, 00:17:54.982 "bdev_io_cache_size": 256, 00:17:54.982 "bdev_auto_examine": true, 00:17:54.982 "iobuf_small_cache_size": 128, 00:17:54.982 "iobuf_large_cache_size": 16 00:17:54.982 } 00:17:54.982 }, 00:17:54.982 { 00:17:54.982 "method": "bdev_raid_set_options", 00:17:54.982 "params": { 00:17:54.982 "process_window_size_kb": 1024, 00:17:54.982 "process_max_bandwidth_mb_sec": 0 00:17:54.982 } 00:17:54.982 }, 00:17:54.982 { 00:17:54.982 "method": "bdev_iscsi_set_options", 00:17:54.982 "params": { 00:17:54.982 "timeout_sec": 30 00:17:54.982 } 00:17:54.982 }, 00:17:54.982 { 00:17:54.982 "method": "bdev_nvme_set_options", 00:17:54.982 "params": { 00:17:54.982 "action_on_timeout": "none", 00:17:54.982 "timeout_us": 0, 00:17:54.982 "timeout_admin_us": 0, 00:17:54.982 "keep_alive_timeout_ms": 10000, 00:17:54.982 "arbitration_burst": 0, 00:17:54.982 "low_priority_weight": 0, 00:17:54.982 "medium_priority_weight": 0, 00:17:54.982 "high_priority_weight": 0, 00:17:54.982 "nvme_adminq_poll_period_us": 10000, 00:17:54.982 "nvme_ioq_poll_period_us": 0, 00:17:54.983 "io_queue_requests": 0, 00:17:54.983 "delay_cmd_submit": true, 00:17:54.983 "transport_retry_count": 4, 00:17:54.983 "bdev_retry_count": 3, 00:17:54.983 "transport_ack_timeout": 0, 00:17:54.983 "ctrlr_loss_timeout_sec": 0, 00:17:54.983 "reconnect_delay_sec": 0, 00:17:54.983 "fast_io_fail_timeout_sec": 0, 00:17:54.983 "disable_auto_failback": false, 00:17:54.983 "generate_uuids": false, 00:17:54.983 "transport_tos": 0, 00:17:54.983 "nvme_error_stat": false, 00:17:54.983 "rdma_srq_size": 0, 00:17:54.983 "io_path_stat": false, 00:17:54.983 "allow_accel_sequence": false, 00:17:54.983 "rdma_max_cq_size": 0, 00:17:54.983 "rdma_cm_event_timeout_ms": 0, 00:17:54.983 "dhchap_digests": [ 00:17:54.983 "sha256", 00:17:54.983 "sha384", 00:17:54.983 "sha512" 00:17:54.983 ], 00:17:54.983 "dhchap_dhgroups": [ 00:17:54.983 "null", 00:17:54.983 "ffdhe2048", 00:17:54.983 "ffdhe3072", 00:17:54.983 "ffdhe4096", 00:17:54.983 "ffdhe6144", 00:17:54.983 "ffdhe8192" 00:17:54.983 ] 00:17:54.983 } 00:17:54.983 }, 00:17:54.983 { 00:17:54.983 "method": "bdev_nvme_set_hotplug", 00:17:54.983 "params": { 00:17:54.983 "period_us": 100000, 00:17:54.983 "enable": false 00:17:54.983 } 00:17:54.983 }, 00:17:54.983 { 00:17:54.983 "method": "bdev_malloc_create", 00:17:54.983 "params": { 00:17:54.983 "name": "malloc0", 00:17:54.983 "num_blocks": 8192, 00:17:54.983 "block_size": 4096, 00:17:54.983 "physical_block_size": 4096, 00:17:54.983 "uuid": "8fc03ff7-96ed-4358-b0b3-7d3565f1f0c4", 00:17:54.983 "optimal_io_boundary": 0, 00:17:54.983 "md_size": 0, 00:17:54.983 "dif_type": 0, 00:17:54.983 "dif_is_head_of_md": false, 00:17:54.983 "dif_pi_format": 0 00:17:54.983 } 00:17:54.983 }, 00:17:54.983 { 00:17:54.983 "method": "bdev_wait_for_examine" 00:17:54.983 } 00:17:54.983 ] 00:17:54.983 }, 00:17:54.983 { 00:17:54.983 "subsystem": "scsi", 00:17:54.983 "config": null 00:17:54.983 }, 00:17:54.983 { 00:17:54.983 "subsystem": "scheduler", 00:17:54.983 
"config": [ 00:17:54.983 { 00:17:54.983 "method": "framework_set_scheduler", 00:17:54.983 "params": { 00:17:54.983 "name": "static" 00:17:54.983 } 00:17:54.983 } 00:17:54.983 ] 00:17:54.983 }, 00:17:54.983 { 00:17:54.983 "subsystem": "vhost_scsi", 00:17:54.983 "config": [] 00:17:54.983 }, 00:17:54.983 { 00:17:54.983 "subsystem": "vhost_blk", 00:17:54.983 "config": [] 00:17:54.983 }, 00:17:54.983 { 00:17:54.983 "subsystem": "ublk", 00:17:54.983 "config": [ 00:17:54.983 { 00:17:54.983 "method": "ublk_create_target", 00:17:54.983 "params": { 00:17:54.983 "cpumask": "1" 00:17:54.983 } 00:17:54.983 }, 00:17:54.983 { 00:17:54.983 "method": "ublk_start_disk", 00:17:54.983 "params": { 00:17:54.983 "bdev_name": "malloc0", 00:17:54.983 "ublk_id": 0, 00:17:54.983 "num_queues": 1, 00:17:54.983 "queue_depth": 128 00:17:54.983 } 00:17:54.983 } 00:17:54.983 ] 00:17:54.983 }, 00:17:54.983 { 00:17:54.983 "subsystem": "nbd", 00:17:54.983 "config": [] 00:17:54.983 }, 00:17:54.983 { 00:17:54.983 "subsystem": "nvmf", 00:17:54.983 "config": [ 00:17:54.983 { 00:17:54.983 "method": "nvmf_set_config", 00:17:54.983 "params": { 00:17:54.983 "discovery_filter": "match_any", 00:17:54.983 "admin_cmd_passthru": { 00:17:54.983 "identify_ctrlr": false 00:17:54.983 }, 00:17:54.983 "dhchap_digests": [ 00:17:54.983 "sha256", 00:17:54.983 "sha384", 00:17:54.983 "sha512" 00:17:54.983 ], 00:17:54.983 "dhchap_dhgroups": [ 00:17:54.983 "null", 00:17:54.983 "ffdhe2048", 00:17:54.983 "ffdhe3072", 00:17:54.983 "ffdhe4096", 00:17:54.983 "ffdhe6144", 00:17:54.983 "ffdhe8192" 00:17:54.983 ] 00:17:54.983 } 00:17:54.983 }, 00:17:54.983 { 00:17:54.983 "method": "nvmf_set_max_subsystems", 00:17:54.983 "params": { 00:17:54.983 "max_subsystems": 1024 00:17:54.983 } 00:17:54.983 }, 00:17:54.983 { 00:17:54.983 "method": "nvmf_set_crdt", 00:17:54.983 "params": { 00:17:54.983 "crdt1": 0, 00:17:54.983 "crdt2": 0, 00:17:54.983 "crdt3": 0 00:17:54.983 } 00:17:54.983 } 00:17:54.983 ] 00:17:54.983 }, 00:17:54.983 { 00:17:54.983 "subsystem": "iscsi", 00:17:54.983 "config": [ 00:17:54.983 { 00:17:54.983 "method": "iscsi_set_options", 00:17:54.983 "params": { 00:17:54.983 "node_base": "iqn.2016-06.io.spdk", 00:17:54.983 "max_sessions": 128, 00:17:54.983 "max_connections_per_session": 2, 00:17:54.983 "max_queue_depth": 64, 00:17:54.983 "default_time2wait": 2, 00:17:54.983 "default_time2retain": 20, 00:17:54.983 "first_burst_length": 8192, 00:17:54.983 "immediate_data": true, 00:17:54.983 "allow_duplicated_isid": false, 00:17:54.983 "error_recovery_level": 0, 00:17:54.983 "nop_timeout": 60, 00:17:54.983 "nop_in_interval": 30, 00:17:54.983 "disable_chap": false, 00:17:54.983 "require_chap": false, 00:17:54.983 "mutual_chap": false, 00:17:54.983 "chap_group": 0, 00:17:54.983 "max_large_datain_per_connection": 64, 00:17:54.983 "max_r2t_per_connection": 4, 00:17:54.983 "pdu_pool_size": 36864, 00:17:54.983 "immediate_data_pool_size": 16384, 00:17:54.983 "data_out_pool_size": 2048 00:17:54.983 } 00:17:54.983 } 00:17:54.983 ] 00:17:54.983 } 00:17:54.983 ] 00:17:54.983 }' 00:17:54.983 08:36:33 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.983 08:36:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:54.983 08:36:33 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:17:54.983 [2024-11-19 08:36:33.885817] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:17:54.983 [2024-11-19 08:36:33.885995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72872 ] 00:17:54.983 [2024-11-19 08:36:34.071977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.983 [2024-11-19 08:36:34.193854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.919 [2024-11-19 08:36:35.127631] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:55.919 [2024-11-19 08:36:35.128758] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:55.919 [2024-11-19 08:36:35.135778] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:17:55.919 [2024-11-19 08:36:35.135881] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:17:55.919 [2024-11-19 08:36:35.135899] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:55.919 [2024-11-19 08:36:35.135909] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:55.919 [2024-11-19 08:36:35.144706] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:55.919 [2024-11-19 08:36:35.144733] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:55.919 [2024-11-19 08:36:35.151657] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:55.919 [2024-11-19 08:36:35.151772] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:55.919 [2024-11-19 08:36:35.168646] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 72872 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 72872 ']' 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 72872 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72872 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:56.178 killing process with pid 72872 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72872' 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 72872 00:17:56.178 08:36:35 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 72872 00:17:57.554 [2024-11-19 08:36:36.675583] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:57.554 [2024-11-19 08:36:36.718670] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:57.554 [2024-11-19 08:36:36.718846] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:57.554 [2024-11-19 08:36:36.726652] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:57.554 [2024-11-19 08:36:36.726718] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:57.554 [2024-11-19 08:36:36.726731] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:57.554 [2024-11-19 08:36:36.726764] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:57.554 [2024-11-19 08:36:36.726941] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:59.533 08:36:38 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:17:59.533 00:17:59.533 real 0m9.653s 00:17:59.533 user 0m7.469s 00:17:59.533 sys 0m3.213s 00:17:59.533 08:36:38 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.533 08:36:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:59.533 ************************************ 00:17:59.533 END TEST test_save_ublk_config 00:17:59.533 ************************************ 00:17:59.533 08:36:38 ublk -- ublk/ublk.sh@139 -- # spdk_pid=72959 00:17:59.533 08:36:38 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:59.533 08:36:38 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.533 08:36:38 ublk -- ublk/ublk.sh@141 -- # waitforlisten 72959 00:17:59.533 08:36:38 ublk -- common/autotest_common.sh@835 -- # '[' -z 72959 ']' 00:17:59.533 08:36:38 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.533 08:36:38 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.533 08:36:38 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.533 08:36:38 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.533 08:36:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:59.533 [2024-11-19 08:36:38.554558] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
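test_save_ublk_config is done (about 9.7 s wall time) and the suite now launches a longer-lived target on two cores (-m 0x3) for the create/stop tests. The basic lifecycle that test_create_ublk drives over RPC, reconstructed from the xtrace that follows (rpc.py stands in for the test's rpc_cmd wrapper; names and sizes are the ones used in this log):

  # Create the ublk target, a RAM-backed bdev, then expose it as /dev/ublkb0
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create 128 4096              # 128 MiB malloc bdev, 4 KiB blocks -> Malloc0
  rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512    # ublk id 0, 4 queues, queue depth 512
  rpc.py ublk_get_disks -n 0                      # confirm the kernel device exists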
00:17:59.533 [2024-11-19 08:36:38.554723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72959 ] 00:17:59.533 [2024-11-19 08:36:38.735316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:59.792 [2024-11-19 08:36:38.864275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.792 [2024-11-19 08:36:38.864287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.357 08:36:39 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.357 08:36:39 ublk -- common/autotest_common.sh@868 -- # return 0 00:18:00.357 08:36:39 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:18:00.357 08:36:39 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:00.357 08:36:39 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.357 08:36:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:00.357 ************************************ 00:18:00.357 START TEST test_create_ublk 00:18:00.357 ************************************ 00:18:00.357 08:36:39 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:18:00.357 08:36:39 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:18:00.357 08:36:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.357 08:36:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:00.357 [2024-11-19 08:36:39.645635] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:00.357 [2024-11-19 08:36:39.648059] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:00.357 08:36:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.357 08:36:39 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:18:00.616 08:36:39 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:18:00.616 08:36:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.616 08:36:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:00.616 08:36:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.616 08:36:39 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:18:00.616 08:36:39 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:00.616 08:36:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.616 08:36:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:00.616 [2024-11-19 08:36:39.893809] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:00.616 [2024-11-19 08:36:39.894286] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:00.616 [2024-11-19 08:36:39.894314] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:00.616 [2024-11-19 08:36:39.894326] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:00.616 [2024-11-19 08:36:39.902842] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:00.616 [2024-11-19 08:36:39.902874] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:00.873 
[2024-11-19 08:36:39.909652] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:00.874 [2024-11-19 08:36:39.921707] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:00.874 [2024-11-19 08:36:39.935753] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:00.874 08:36:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.874 08:36:39 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:18:00.874 08:36:39 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:18:00.874 08:36:39 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:18:00.874 08:36:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.874 08:36:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:00.874 08:36:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.874 08:36:39 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:18:00.874 { 00:18:00.874 "ublk_device": "/dev/ublkb0", 00:18:00.874 "id": 0, 00:18:00.874 "queue_depth": 512, 00:18:00.874 "num_queues": 4, 00:18:00.874 "bdev_name": "Malloc0" 00:18:00.874 } 00:18:00.874 ]' 00:18:00.874 08:36:39 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:18:00.874 08:36:39 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:00.874 08:36:39 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:18:00.874 08:36:40 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:18:00.874 08:36:40 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:18:00.874 08:36:40 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:18:00.874 08:36:40 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:18:01.132 08:36:40 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:18:01.132 08:36:40 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:18:01.132 08:36:40 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:01.132 08:36:40 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:18:01.132 08:36:40 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:18:01.132 08:36:40 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:18:01.132 08:36:40 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:18:01.132 08:36:40 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:18:01.132 08:36:40 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:18:01.132 08:36:40 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:18:01.132 08:36:40 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:18:01.132 08:36:40 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:18:01.132 08:36:40 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:01.132 08:36:40 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
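run_fio_test then composes a single write job with pattern verification against the new device. The assembled command line (identical to the fio_template printed above) can be run by hand:

  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

Because --time_based with --runtime=10 lets the write phase consume the entire run, fio's note below that the verification read phase will never start is expected behavior here, not a failure.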
00:18:01.132 08:36:40 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:18:01.132 fio: verification read phase will never start because write phase uses all of runtime 00:18:01.132 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:18:01.132 fio-3.35 00:18:01.132 Starting 1 process 00:18:13.330 00:18:13.330 fio_test: (groupid=0, jobs=1): err= 0: pid=73005: Tue Nov 19 08:36:50 2024 00:18:13.330 write: IOPS=9948, BW=38.9MiB/s (40.8MB/s)(389MiB/10001msec); 0 zone resets 00:18:13.330 clat (usec): min=64, max=9142, avg=98.92, stdev=172.02 00:18:13.330 lat (usec): min=65, max=9172, avg=99.75, stdev=172.05 00:18:13.330 clat percentiles (usec): 00:18:13.330 | 1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 82], 20.00th=[ 83], 00:18:13.330 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 87], 00:18:13.330 | 70.00th=[ 90], 80.00th=[ 95], 90.00th=[ 100], 95.00th=[ 108], 00:18:13.330 | 99.00th=[ 133], 99.50th=[ 169], 99.90th=[ 3294], 99.95th=[ 3818], 00:18:13.330 | 99.99th=[ 4146] 00:18:13.330 bw ( KiB/s): min=17136, max=42696, per=99.93%, avg=39767.58, stdev=5651.98, samples=19 00:18:13.330 iops : min= 4284, max=10674, avg=9942.00, stdev=1413.05, samples=19 00:18:13.330 lat (usec) : 100=90.02%, 250=9.54%, 500=0.02%, 750=0.02%, 1000=0.03% 00:18:13.331 lat (msec) : 2=0.12%, 4=0.24%, 10=0.03% 00:18:13.331 cpu : usr=3.41%, sys=7.41%, ctx=99502, majf=0, minf=796 00:18:13.331 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:13.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:13.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:13.331 issued rwts: total=0,99498,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:13.331 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:13.331 00:18:13.331 Run status group 0 (all jobs): 00:18:13.331 WRITE: bw=38.9MiB/s (40.8MB/s), 38.9MiB/s-38.9MiB/s (40.8MB/s-40.8MB/s), io=389MiB (408MB), run=10001-10001msec 00:18:13.331 00:18:13.331 Disk stats (read/write): 00:18:13.331 ublkb0: ios=0/98437, merge=0/0, ticks=0/8916, in_queue=8916, util=99.09% 00:18:13.331 08:36:50 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.331 [2024-11-19 08:36:50.461386] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:13.331 [2024-11-19 08:36:50.504700] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:13.331 [2024-11-19 08:36:50.505576] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:13.331 [2024-11-19 08:36:50.512679] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:13.331 [2024-11-19 08:36:50.513040] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:13.331 [2024-11-19 08:36:50.513061] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.331 08:36:50 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:18:13.331 08:36:50 
ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.331 [2024-11-19 08:36:50.528755] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:18:13.331 request: 00:18:13.331 { 00:18:13.331 "ublk_id": 0, 00:18:13.331 "method": "ublk_stop_disk", 00:18:13.331 "req_id": 1 00:18:13.331 } 00:18:13.331 Got JSON-RPC error response 00:18:13.331 response: 00:18:13.331 { 00:18:13.331 "code": -19, 00:18:13.331 "message": "No such device" 00:18:13.331 } 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:13.331 08:36:50 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.331 [2024-11-19 08:36:50.544773] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:13.331 [2024-11-19 08:36:50.552638] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:13.331 [2024-11-19 08:36:50.552707] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.331 08:36:50 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.331 08:36:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.331 08:36:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.331 08:36:51 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:18:13.331 08:36:51 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:13.331 08:36:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.331 08:36:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.331 08:36:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.331 08:36:51 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:13.331 08:36:51 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:18:13.331 08:36:51 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 
']' 00:18:13.331 08:36:51 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:13.331 08:36:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.331 08:36:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.331 08:36:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.331 08:36:51 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:13.331 08:36:51 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:18:13.331 ************************************ 00:18:13.331 END TEST test_create_ublk 00:18:13.331 ************************************ 00:18:13.331 08:36:51 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:13.331 00:18:13.331 real 0m11.670s 00:18:13.331 user 0m0.791s 00:18:13.331 sys 0m0.839s 00:18:13.331 08:36:51 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.331 08:36:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.331 08:36:51 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:18:13.331 08:36:51 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:13.331 08:36:51 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.331 08:36:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.331 ************************************ 00:18:13.331 START TEST test_create_multi_ublk 00:18:13.331 ************************************ 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.331 [2024-11-19 08:36:51.371646] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:13.331 [2024-11-19 08:36:51.374008] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.331 [2024-11-19 08:36:51.657842] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:13.331 [2024-11-19 
08:36:51.658366] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:13.331 [2024-11-19 08:36:51.658393] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:13.331 [2024-11-19 08:36:51.658410] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:13.331 [2024-11-19 08:36:51.665668] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:13.331 [2024-11-19 08:36:51.665710] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:13.331 [2024-11-19 08:36:51.673661] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:13.331 [2024-11-19 08:36:51.674439] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:13.331 [2024-11-19 08:36:51.685008] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.331 08:36:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:18:13.332 08:36:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:13.332 08:36:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:18:13.332 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.332 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.332 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.332 08:36:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:18:13.332 08:36:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:18:13.332 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.332 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.332 [2024-11-19 08:36:51.948831] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:18:13.332 [2024-11-19 08:36:51.949346] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:18:13.332 [2024-11-19 08:36:51.949367] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:13.332 [2024-11-19 08:36:51.949378] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:13.332 [2024-11-19 08:36:51.957911] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:13.332 [2024-11-19 08:36:51.961625] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:13.332 [2024-11-19 08:36:51.971640] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:13.332 [2024-11-19 08:36:51.972560] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:13.332 [2024-11-19 08:36:51.980710] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:13.332 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.332 08:36:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:18:13.332 08:36:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:13.332 08:36:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:18:13.332 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.332 08:36:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.332 [2024-11-19 08:36:52.242748] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:18:13.332 [2024-11-19 08:36:52.243274] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:18:13.332 [2024-11-19 08:36:52.243292] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:18:13.332 [2024-11-19 08:36:52.243304] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:18:13.332 [2024-11-19 08:36:52.250692] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:13.332 [2024-11-19 08:36:52.250861] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:13.332 [2024-11-19 08:36:52.258660] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:13.332 [2024-11-19 08:36:52.259439] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:18:13.332 [2024-11-19 08:36:52.267696] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.332 [2024-11-19 08:36:52.528891] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:18:13.332 [2024-11-19 08:36:52.529436] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:18:13.332 [2024-11-19 08:36:52.529469] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:18:13.332 [2024-11-19 08:36:52.529481] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:18:13.332 [2024-11-19 08:36:52.537922] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:13.332 [2024-11-19 08:36:52.537957] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:13.332 [2024-11-19 08:36:52.544654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:13.332 [2024-11-19 08:36:52.545409] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:18:13.332 [2024-11-19 08:36:52.553717] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:18:13.332 { 00:18:13.332 "ublk_device": "/dev/ublkb0", 00:18:13.332 "id": 0, 00:18:13.332 "queue_depth": 512, 00:18:13.332 "num_queues": 4, 00:18:13.332 "bdev_name": "Malloc0" 00:18:13.332 }, 00:18:13.332 { 00:18:13.332 "ublk_device": "/dev/ublkb1", 00:18:13.332 "id": 1, 00:18:13.332 "queue_depth": 512, 00:18:13.332 "num_queues": 4, 00:18:13.332 "bdev_name": "Malloc1" 00:18:13.332 }, 00:18:13.332 { 00:18:13.332 "ublk_device": "/dev/ublkb2", 00:18:13.332 "id": 2, 00:18:13.332 "queue_depth": 512, 00:18:13.332 "num_queues": 4, 00:18:13.332 "bdev_name": "Malloc2" 00:18:13.332 }, 00:18:13.332 { 00:18:13.332 "ublk_device": "/dev/ublkb3", 00:18:13.332 "id": 3, 00:18:13.332 "queue_depth": 512, 00:18:13.332 "num_queues": 4, 00:18:13.332 "bdev_name": "Malloc3" 00:18:13.332 } 00:18:13.332 ]' 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:13.332 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:18:13.590 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:13.590 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:18:13.590 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:18:13.590 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:18:13.590 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:13.590 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:18:13.590 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:13.590 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:18:13.590 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:13.590 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:13.590 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:18:13.848 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:18:13.848 08:36:52 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:18:13.849 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:18:13.849 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:18:13.849 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:13.849 08:36:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:18:13.849 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:13.849 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:18:13.849 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:18:13.849 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:13.849 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:18:14.107 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:18:14.107 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:18:14.107 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:18:14.107 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:18:14.107 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:14.107 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:18:14.107 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:14.107 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:18:14.107 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:18:14.107 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:14.107 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:18:14.107 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:18:14.107 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.365 [2024-11-19 08:36:53.581812] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:18:14.365 [2024-11-19 08:36:53.610116] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:14.365 [2024-11-19 08:36:53.611371] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:14.365 [2024-11-19 08:36:53.621705] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:14.365 [2024-11-19 08:36:53.622070] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:14.365 [2024-11-19 08:36:53.622095] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.365 08:36:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.365 [2024-11-19 08:36:53.636740] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:14.624 [2024-11-19 08:36:53.675700] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:14.624 [2024-11-19 08:36:53.680744] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:14.624 [2024-11-19 08:36:53.687681] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:14.624 [2024-11-19 08:36:53.688061] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:14.624 [2024-11-19 08:36:53.688101] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:14.624 08:36:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.624 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:14.624 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:18:14.624 08:36:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.624 08:36:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.624 [2024-11-19 08:36:53.704789] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:18:14.624 [2024-11-19 08:36:53.742122] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:14.624 [2024-11-19 08:36:53.743284] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:18:14.624 [2024-11-19 08:36:53.752673] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:14.624 [2024-11-19 08:36:53.753019] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:18:14.624 [2024-11-19 08:36:53.753053] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:18:14.624 08:36:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.624 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:14.624 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:18:14.624 08:36:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.624 08:36:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:14.624 [2024-11-19 
08:36:53.768849] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:18:14.624 [2024-11-19 08:36:53.808704] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:14.624 [2024-11-19 08:36:53.809722] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:18:14.624 [2024-11-19 08:36:53.817660] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:14.624 [2024-11-19 08:36:53.818114] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:18:14.624 [2024-11-19 08:36:53.818145] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:18:14.624 08:36:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.624 08:36:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:18:14.883 [2024-11-19 08:36:54.110785] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:14.884 [2024-11-19 08:36:54.118660] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:14.884 [2024-11-19 08:36:54.118725] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:14.884 08:36:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:18:14.884 08:36:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:14.884 08:36:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:14.884 08:36:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.884 08:36:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:15.818 08:36:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.818 08:36:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:15.818 08:36:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:15.818 08:36:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.818 08:36:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:15.818 08:36:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.818 08:36:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:15.818 08:36:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:15.818 08:36:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.818 08:36:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:16.386 08:36:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.386 08:36:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:16.386 08:36:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:18:16.386 08:36:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.386 08:36:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:18:16.644 ************************************ 00:18:16.644 END TEST test_create_multi_ublk 00:18:16.644 ************************************ 00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:16.644 00:18:16.644 real 0m4.467s 00:18:16.644 user 0m1.305s 00:18:16.644 sys 0m0.157s 00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.644 08:36:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:16.644 08:36:55 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:16.644 08:36:55 ublk -- ublk/ublk.sh@147 -- # cleanup 00:18:16.644 08:36:55 ublk -- ublk/ublk.sh@130 -- # killprocess 72959 00:18:16.644 08:36:55 ublk -- common/autotest_common.sh@954 -- # '[' -z 72959 ']' 00:18:16.644 08:36:55 ublk -- common/autotest_common.sh@958 -- # kill -0 72959 00:18:16.644 08:36:55 ublk -- common/autotest_common.sh@959 -- # uname 00:18:16.644 08:36:55 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.644 08:36:55 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72959 00:18:16.644 killing process with pid 72959 00:18:16.644 08:36:55 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:16.644 08:36:55 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:16.644 08:36:55 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72959' 00:18:16.644 08:36:55 ublk -- common/autotest_common.sh@973 -- # kill 72959 00:18:16.644 08:36:55 ublk -- common/autotest_common.sh@978 -- # wait 72959 00:18:17.576 [2024-11-19 08:36:56.868236] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:17.576 [2024-11-19 08:36:56.868300] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:19.015 00:18:19.015 real 0m29.422s 00:18:19.015 user 0m42.974s 00:18:19.015 sys 0m9.839s 00:18:19.015 08:36:57 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:19.015 ************************************ 00:18:19.015 END TEST ublk 00:18:19.015 08:36:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:19.015 ************************************ 00:18:19.015 08:36:58 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:19.015 08:36:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:19.015 
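With the ublk suite finished (29.4 s wall time in total), autotest moves on to ublk_recovery. Judging from the setup trace that follows, the test first loads the kernel module and builds a small single-disk configuration; a sketch of that setup phase, with the module name, sizes, and queue shape taken from the log (what the recovery logic then does to this device lies past the end of this excerpt):

  modprobe ublk_drv                              # kernel side of ublk
  spdk_tgt -m 0x3 -L ublk &                      # two-core target, ublk debug logging
  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096   # 64 MiB backing bdev
  rpc.py ublk_start_disk malloc0 1 -q 2 -d 128   # /dev/ublkb1, 2 queues, depth 128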
08:36:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.015 08:36:58 -- common/autotest_common.sh@10 -- # set +x 00:18:19.015 ************************************ 00:18:19.015 START TEST ublk_recovery 00:18:19.015 ************************************ 00:18:19.016 08:36:58 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:19.016 * Looking for test storage... 00:18:19.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:19.016 08:36:58 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:19.016 08:36:58 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:18:19.016 08:36:58 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:19.016 08:36:58 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.016 08:36:58 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:18:19.016 08:36:58 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.016 08:36:58 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:19.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.016 --rc genhtml_branch_coverage=1 00:18:19.016 --rc genhtml_function_coverage=1 00:18:19.016 --rc genhtml_legend=1 00:18:19.016 --rc geninfo_all_blocks=1 00:18:19.016 --rc geninfo_unexecuted_blocks=1 00:18:19.016 00:18:19.016 ' 00:18:19.016 08:36:58 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:19.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.016 --rc genhtml_branch_coverage=1 00:18:19.016 --rc genhtml_function_coverage=1 00:18:19.016 --rc genhtml_legend=1 00:18:19.016 --rc geninfo_all_blocks=1 00:18:19.016 --rc geninfo_unexecuted_blocks=1 00:18:19.016 00:18:19.016 ' 00:18:19.016 08:36:58 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:19.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.016 --rc genhtml_branch_coverage=1 00:18:19.016 --rc genhtml_function_coverage=1 00:18:19.016 --rc genhtml_legend=1 00:18:19.016 --rc geninfo_all_blocks=1 00:18:19.016 --rc geninfo_unexecuted_blocks=1 00:18:19.016 00:18:19.016 ' 00:18:19.016 08:36:58 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:19.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.016 --rc genhtml_branch_coverage=1 00:18:19.016 --rc genhtml_function_coverage=1 00:18:19.016 --rc genhtml_legend=1 00:18:19.016 --rc geninfo_all_blocks=1 00:18:19.016 --rc geninfo_unexecuted_blocks=1 00:18:19.016 00:18:19.016 ' 00:18:19.016 08:36:58 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:19.016 08:36:58 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:19.016 08:36:58 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:19.016 08:36:58 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:19.016 08:36:58 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:19.016 08:36:58 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:19.016 08:36:58 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:19.016 08:36:58 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:19.016 08:36:58 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:18:19.016 08:36:58 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:18:19.016 08:36:58 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73377 00:18:19.016 08:36:58 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.016 08:36:58 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:19.016 08:36:58 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73377 00:18:19.016 08:36:58 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 73377 ']' 00:18:19.016 08:36:58 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.016 08:36:58 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.016 08:36:58 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.016 08:36:58 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.016 08:36:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.274 [2024-11-19 08:36:58.309460] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:18:19.274 [2024-11-19 08:36:58.309841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73377 ] 00:18:19.275 [2024-11-19 08:36:58.487746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:19.533 [2024-11-19 08:36:58.640704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.533 [2024-11-19 08:36:58.640705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.466 08:36:59 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.466 08:36:59 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:20.466 08:36:59 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:18:20.467 08:36:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.467 08:36:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.467 [2024-11-19 08:36:59.409634] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:20.467 [2024-11-19 08:36:59.412018] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:20.467 08:36:59 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.467 08:36:59 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:20.467 08:36:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.467 08:36:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.467 malloc0 00:18:20.467 08:36:59 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.467 08:36:59 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:18:20.467 08:36:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.467 08:36:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.467 [2024-11-19 08:36:59.545917] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:18:20.467 [2024-11-19 08:36:59.546080] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:18:20.467 [2024-11-19 08:36:59.546100] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:20.467 [2024-11-19 08:36:59.546114] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:20.467 [2024-11-19 08:36:59.553843] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:20.467 [2024-11-19 08:36:59.553887] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:20.467 [2024-11-19 08:36:59.560672] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:20.467 [2024-11-19 08:36:59.560866] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:20.467 [2024-11-19 08:36:59.576663] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:20.467 1 00:18:20.467 08:36:59 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.467 08:36:59 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:18:21.399 08:37:00 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73412 00:18:21.399 08:37:00 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:18:21.399 08:37:00 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:18:21.657 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:21.657 fio-3.35 00:18:21.657 Starting 1 process 00:18:26.927 08:37:05 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73377 00:18:26.927 08:37:05 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:32.296 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73377 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:32.296 08:37:10 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=73523 00:18:32.296 08:37:10 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:32.296 08:37:10 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:32.296 08:37:10 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 73523 00:18:32.296 08:37:10 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 73523 ']' 00:18:32.296 08:37:10 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.296 08:37:10 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.296 08:37:10 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.296 08:37:10 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.296 08:37:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:32.296 [2024-11-19 08:37:10.728337] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
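
Everything up to the SIGKILL above reduces to a handful of RPC calls against the target's /var/tmp/spdk.sock, a fio job pinned to cores 2-3, and a kill -9 to simulate a crash mid-I/O. A minimal sketch of that sequence, run from the SPDK repo root (the ublk ID 1 and the fio arguments are taken verbatim from the trace; waitforlisten and error handling omitted):

    modprobe ublk_drv
    build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    # ... waitforlisten: poll /var/tmp/spdk.sock until the target answers ...
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096      # 64 MiB bdev, 4 KiB blocks
    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128      # exports /dev/ublkb1
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    fio_proc=$!
    sleep 5
    kill -9 "$spdk_pid"    # crash the target while fio still has I/O in flight
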
00:18:32.296 [2024-11-19 08:37:10.728486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73523 ] 00:18:32.296 [2024-11-19 08:37:10.962663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:32.296 [2024-11-19 08:37:11.096832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.296 [2024-11-19 08:37:11.096844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.862 08:37:11 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.862 08:37:11 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:32.862 08:37:11 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:32.862 08:37:11 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.862 08:37:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:32.862 [2024-11-19 08:37:11.873693] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:32.862 [2024-11-19 08:37:11.876106] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:32.862 08:37:11 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.862 08:37:11 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:32.862 08:37:11 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.862 08:37:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:32.862 malloc0 00:18:32.862 08:37:12 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.862 08:37:12 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:32.862 08:37:12 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.862 08:37:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:32.862 [2024-11-19 08:37:12.008924] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:32.862 [2024-11-19 08:37:12.008976] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:32.862 [2024-11-19 08:37:12.009009] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:32.862 [2024-11-19 08:37:12.016781] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:32.862 [2024-11-19 08:37:12.016807] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:32.862 1 00:18:32.862 08:37:12 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.862 08:37:12 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73412 00:18:33.800 [2024-11-19 08:37:13.017680] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:33.800 [2024-11-19 08:37:13.024658] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:33.800 [2024-11-19 08:37:13.024686] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:34.739 [2024-11-19 08:37:14.024724] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:34.739 [2024-11-19 08:37:14.028637] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:34.739 [2024-11-19 08:37:14.028670] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:18:36.117 [2024-11-19 08:37:15.028701] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:36.117 [2024-11-19 08:37:15.036700] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:36.117 [2024-11-19 08:37:15.036725] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:36.117 [2024-11-19 08:37:15.036740] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:36.117 [2024-11-19 08:37:15.036865] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:18:58.053 [2024-11-19 08:37:35.646709] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:18:58.053 [2024-11-19 08:37:35.654245] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:18:58.053 [2024-11-19 08:37:35.660949] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:18:58.053 [2024-11-19 08:37:35.660981] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:19:24.621 00:19:24.621 fio_test: (groupid=0, jobs=1): err= 0: pid=73415: Tue Nov 19 08:38:00 2024 00:19:24.621 read: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(2354MiB/60002msec) 00:19:24.621 slat (nsec): min=1801, max=1748.2k, avg=6333.38, stdev=3789.01 00:19:24.621 clat (usec): min=1308, max=30077k, avg=6066.47, stdev=297574.92 00:19:24.621 lat (usec): min=1315, max=30077k, avg=6072.80, stdev=297574.92 00:19:24.621 clat percentiles (usec): 00:19:24.621 | 1.00th=[ 2540], 5.00th=[ 2737], 10.00th=[ 2802], 20.00th=[ 2868], 00:19:24.621 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:19:24.621 | 70.00th=[ 3064], 80.00th=[ 3130], 90.00th=[ 3294], 95.00th=[ 4178], 00:19:24.621 | 99.00th=[ 6194], 99.50th=[ 6718], 99.90th=[ 8848], 99.95th=[12911], 00:19:24.621 | 99.99th=[13698] 00:19:24.621 bw ( KiB/s): min=14618, max=87064, per=100.00%, avg=79129.95, stdev=11403.53, samples=60 00:19:24.621 iops : min= 3654, max=21766, avg=19782.45, stdev=2850.92, samples=60 00:19:24.621 write: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(2351MiB/60002msec); 0 zone resets 00:19:24.621 slat (nsec): min=1984, max=523803, avg=6597.79, stdev=3176.57 00:19:24.621 clat (usec): min=1093, max=30078k, avg=6673.04, stdev=321988.24 00:19:24.621 lat (usec): min=1102, max=30078k, avg=6679.64, stdev=321988.24 00:19:24.621 clat percentiles (msec): 00:19:24.621 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:19:24.621 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:19:24.621 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:19:24.621 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 13], 00:19:24.621 | 99.99th=[17113] 00:19:24.621 bw ( KiB/s): min=13684, max=86944, per=100.00%, avg=79046.00, stdev=11433.95, samples=60 00:19:24.621 iops : min= 3421, max=21736, avg=19761.47, stdev=2858.48, samples=60 00:19:24.621 lat (msec) : 2=0.05%, 4=94.55%, 10=5.34%, 20=0.06%, >=2000=0.01% 00:19:24.621 cpu : usr=5.30%, sys=12.42%, ctx=40334, majf=0, minf=14 00:19:24.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:24.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:24.622 issued rwts: total=602523,601841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.622 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:19:24.622 00:19:24.622 Run status group 0 (all jobs): 00:19:24.622 READ: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=2354MiB (2468MB), run=60002-60002msec 00:19:24.622 WRITE: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=2351MiB (2465MB), run=60002-60002msec 00:19:24.622 00:19:24.622 Disk stats (read/write): 00:19:24.622 ublkb1: ios=600218/599586, merge=0/0, ticks=3595678/3889669, in_queue=7485348, util=99.93% 00:19:24.622 08:38:00 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:19:24.622 08:38:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.622 08:38:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.622 [2024-11-19 08:38:00.849712] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:24.622 [2024-11-19 08:38:00.896708] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:24.622 [2024-11-19 08:38:00.896917] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:24.622 [2024-11-19 08:38:00.905732] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:24.622 [2024-11-19 08:38:00.905909] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:24.622 [2024-11-19 08:38:00.909646] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:24.622 08:38:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.622 08:38:00 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:19:24.622 08:38:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.622 08:38:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.622 [2024-11-19 08:38:00.918824] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:24.622 [2024-11-19 08:38:00.926716] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:24.622 [2024-11-19 08:38:00.926782] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:24.622 08:38:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.622 08:38:00 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:19:24.622 08:38:00 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:19:24.622 08:38:00 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 73523 00:19:24.622 08:38:00 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 73523 ']' 00:19:24.622 08:38:00 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 73523 00:19:24.622 08:38:00 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:19:24.622 08:38:00 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.622 08:38:00 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73523 00:19:24.622 killing process with pid 73523 00:19:24.622 08:38:00 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.622 08:38:00 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.622 08:38:00 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73523' 00:19:24.622 08:38:00 ublk_recovery -- common/autotest_common.sh@973 -- # kill 73523 00:19:24.622 08:38:00 ublk_recovery -- common/autotest_common.sh@978 -- # wait 73523 00:19:24.622 [2024-11-19 08:38:02.393716] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 
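
The recovery leg traced above mirrors the first bring-up but swaps ublk_start_disk for ublk_recover_disk, which re-attaches the orphaned kernel device (repeated UBLK_CMD_GET_DEV_INFO polls while the device reports state 1, then START/END_USER_RECOVERY) so the still-running fio job can finish its 60-second run. Condensed to the RPC calls visible in the trace (a sketch, not the test script itself):

    build/bin/spdk_tgt -m 0x3 -L ublk &    # second instance, pid 73523 in the trace
    # ... waitforlisten ...
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1    # GET_DEV_INFO polls, then START/END_USER_RECOVERY
    wait "$fio_proc"                              # fio rides out the crash and completes
    scripts/rpc.py ublk_stop_disk 1
    scripts/rpc.py ublk_destroy_target
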
00:19:24.622 [2024-11-19 08:38:02.393777] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:24.622 00:19:24.622 real 1m5.581s 00:19:24.622 user 1m50.369s 00:19:24.622 sys 0m20.716s 00:19:24.622 08:38:03 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.622 08:38:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.622 ************************************ 00:19:24.622 END TEST ublk_recovery 00:19:24.622 ************************************ 00:19:24.622 08:38:03 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:19:24.622 08:38:03 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:24.622 08:38:03 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:24.622 08:38:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:24.622 08:38:03 -- common/autotest_common.sh@10 -- # set +x 00:19:24.622 08:38:03 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:24.622 08:38:03 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:24.622 08:38:03 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:24.622 08:38:03 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:24.622 08:38:03 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:24.622 08:38:03 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:24.622 08:38:03 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:24.622 08:38:03 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:24.622 08:38:03 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:24.622 08:38:03 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:19:24.622 08:38:03 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:24.622 08:38:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:24.622 08:38:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.622 08:38:03 -- common/autotest_common.sh@10 -- # set +x 00:19:24.622 ************************************ 00:19:24.622 START TEST ftl 00:19:24.622 ************************************ 00:19:24.622 08:38:03 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:24.622 * Looking for test storage... 00:19:24.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:24.622 08:38:03 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:24.622 08:38:03 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:19:24.622 08:38:03 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:24.622 08:38:03 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:24.622 08:38:03 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:24.622 08:38:03 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:24.622 08:38:03 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:24.622 08:38:03 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:19:24.622 08:38:03 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:19:24.622 08:38:03 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:19:24.622 08:38:03 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:19:24.622 08:38:03 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:19:24.622 08:38:03 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:19:24.622 08:38:03 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:19:24.622 08:38:03 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:24.622 08:38:03 ftl -- scripts/common.sh@344 -- # case "$op" in 00:19:24.622 08:38:03 ftl -- scripts/common.sh@345 -- # : 1 00:19:24.622 08:38:03 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:24.622 08:38:03 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
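
run_test, which produced the START TEST / END TEST banners and the real/user/sys line above, wraps each suite in timing and xtrace control. A minimal stand-in with the same observable output (an assumption-laden sketch, not the actual autotest_common.sh helper, which also validates its arguments and records per-test timing):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }
    run_test ftl test/ftl/ftl.sh    # as dispatched by autotest.sh above
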
ver1_l : ver2_l) )) 00:19:24.622 08:38:03 ftl -- scripts/common.sh@365 -- # decimal 1 00:19:24.622 08:38:03 ftl -- scripts/common.sh@353 -- # local d=1 00:19:24.622 08:38:03 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:24.622 08:38:03 ftl -- scripts/common.sh@355 -- # echo 1 00:19:24.622 08:38:03 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:19:24.622 08:38:03 ftl -- scripts/common.sh@366 -- # decimal 2 00:19:24.622 08:38:03 ftl -- scripts/common.sh@353 -- # local d=2 00:19:24.622 08:38:03 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:24.622 08:38:03 ftl -- scripts/common.sh@355 -- # echo 2 00:19:24.622 08:38:03 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:19:24.622 08:38:03 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:24.622 08:38:03 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:24.622 08:38:03 ftl -- scripts/common.sh@368 -- # return 0 00:19:24.622 08:38:03 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:24.622 08:38:03 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:24.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.622 --rc genhtml_branch_coverage=1 00:19:24.622 --rc genhtml_function_coverage=1 00:19:24.622 --rc genhtml_legend=1 00:19:24.622 --rc geninfo_all_blocks=1 00:19:24.622 --rc geninfo_unexecuted_blocks=1 00:19:24.622 00:19:24.622 ' 00:19:24.622 08:38:03 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:24.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.622 --rc genhtml_branch_coverage=1 00:19:24.622 --rc genhtml_function_coverage=1 00:19:24.622 --rc genhtml_legend=1 00:19:24.622 --rc geninfo_all_blocks=1 00:19:24.622 --rc geninfo_unexecuted_blocks=1 00:19:24.622 00:19:24.622 ' 00:19:24.622 08:38:03 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:24.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.622 --rc genhtml_branch_coverage=1 00:19:24.622 --rc genhtml_function_coverage=1 00:19:24.622 --rc genhtml_legend=1 00:19:24.622 --rc geninfo_all_blocks=1 00:19:24.622 --rc geninfo_unexecuted_blocks=1 00:19:24.622 00:19:24.622 ' 00:19:24.622 08:38:03 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:24.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.622 --rc genhtml_branch_coverage=1 00:19:24.622 --rc genhtml_function_coverage=1 00:19:24.622 --rc genhtml_legend=1 00:19:24.622 --rc geninfo_all_blocks=1 00:19:24.622 --rc geninfo_unexecuted_blocks=1 00:19:24.622 00:19:24.622 ' 00:19:24.622 08:38:03 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:24.622 08:38:03 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:24.622 08:38:03 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:24.622 08:38:03 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:24.622 08:38:03 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
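
The lt/cmp_versions trace repeated before each suite is a dotted-version comparison that gates which lcov flag spelling to use (here lcov 1.x vs 2, selecting the --rc lcov_branch_coverage form). The idiom, condensed from the traced loop (the real scripts/common.sh also normalizes non-numeric fields through its decimal helper):

    lt() {    # return 0 when $1 sorts strictly before $2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1
    }
    lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
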
00:19:24.622 08:38:03 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:24.622 08:38:03 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.622 08:38:03 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:24.622 08:38:03 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:24.622 08:38:03 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:24.622 08:38:03 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:24.622 08:38:03 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:24.622 08:38:03 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:24.623 08:38:03 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:24.623 08:38:03 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:24.623 08:38:03 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:24.623 08:38:03 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:24.623 08:38:03 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:24.623 08:38:03 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:24.623 08:38:03 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:24.623 08:38:03 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:24.623 08:38:03 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:24.623 08:38:03 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:24.623 08:38:03 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:24.623 08:38:03 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:24.623 08:38:03 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:24.623 08:38:03 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:24.623 08:38:03 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.623 08:38:03 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:24.623 08:38:03 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.623 08:38:03 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:24.623 08:38:03 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:19:24.623 08:38:03 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:24.623 08:38:03 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:24.623 08:38:03 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:25.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:25.190 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:25.190 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:25.190 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:25.190 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:25.190 08:38:04 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74319 00:19:25.190 08:38:04 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:25.190 08:38:04 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74319 00:19:25.190 08:38:04 ftl -- common/autotest_common.sh@835 -- # '[' -z 74319 ']' 00:19:25.190 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.190 08:38:04 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.190 08:38:04 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.190 08:38:04 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.190 08:38:04 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.190 08:38:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:25.447 [2024-11-19 08:38:04.568898] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:19:25.447 [2024-11-19 08:38:04.569260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74319 ] 00:19:25.705 [2024-11-19 08:38:04.742770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.705 [2024-11-19 08:38:04.865738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.639 08:38:05 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.639 08:38:05 ftl -- common/autotest_common.sh@868 -- # return 0 00:19:26.639 08:38:05 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:26.639 08:38:05 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:28.015 08:38:06 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:28.015 08:38:06 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:28.273 08:38:07 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:28.273 08:38:07 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:28.273 08:38:07 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:28.532 08:38:07 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:28.532 08:38:07 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:28.532 08:38:07 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:28.532 08:38:07 ftl -- ftl/ftl.sh@50 -- # break 00:19:28.532 08:38:07 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:28.532 08:38:07 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:19:28.532 08:38:07 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:28.532 08:38:07 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:28.791 08:38:08 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:28.791 08:38:08 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:28.791 08:38:08 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:28.791 08:38:08 ftl -- ftl/ftl.sh@63 -- # break 00:19:28.791 08:38:08 ftl -- ftl/ftl.sh@66 -- # killprocess 74319 00:19:28.791 08:38:08 ftl -- common/autotest_common.sh@954 -- # '[' -z 74319 ']' 00:19:28.791 08:38:08 ftl -- common/autotest_common.sh@958 -- # kill -0 74319 00:19:28.791 08:38:08 ftl -- common/autotest_common.sh@959 -- # uname 00:19:28.791 08:38:08 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.791 08:38:08 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74319 00:19:29.050 killing process with pid 74319 00:19:29.050 08:38:08 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:29.050 08:38:08 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:29.050 08:38:08 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74319' 00:19:29.050 08:38:08 ftl -- common/autotest_common.sh@973 -- # kill 74319 00:19:29.050 08:38:08 ftl -- common/autotest_common.sh@978 -- # wait 74319 00:19:31.026 08:38:10 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:31.026 08:38:10 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:31.026 08:38:10 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:31.026 08:38:10 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.026 08:38:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:31.026 ************************************ 00:19:31.026 START TEST ftl_fio_basic 00:19:31.026 ************************************ 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:31.026 * Looking for test storage... 00:19:31.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:31.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.026 --rc genhtml_branch_coverage=1 00:19:31.026 --rc genhtml_function_coverage=1 00:19:31.026 --rc genhtml_legend=1 00:19:31.026 --rc geninfo_all_blocks=1 00:19:31.026 --rc geninfo_unexecuted_blocks=1 00:19:31.026 00:19:31.026 ' 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:31.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.026 --rc genhtml_branch_coverage=1 00:19:31.026 --rc genhtml_function_coverage=1 00:19:31.026 --rc genhtml_legend=1 00:19:31.026 --rc geninfo_all_blocks=1 00:19:31.026 --rc geninfo_unexecuted_blocks=1 00:19:31.026 00:19:31.026 ' 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:31.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.026 --rc genhtml_branch_coverage=1 00:19:31.026 --rc genhtml_function_coverage=1 00:19:31.026 --rc genhtml_legend=1 00:19:31.026 --rc geninfo_all_blocks=1 00:19:31.026 --rc geninfo_unexecuted_blocks=1 00:19:31.026 00:19:31.026 ' 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:31.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.026 --rc genhtml_branch_coverage=1 00:19:31.026 --rc genhtml_function_coverage=1 00:19:31.026 --rc genhtml_legend=1 00:19:31.026 --rc geninfo_all_blocks=1 00:19:31.026 --rc geninfo_unexecuted_blocks=1 00:19:31.026 00:19:31.026 ' 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:31.026 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
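
Back in ftl.sh, the cache/base split traced earlier (cache_disks=0000:00:10.0, base_disks=0000:00:11.0) is driven entirely by jq filters over bdev_get_bdevs: the NV-cache device must be non-zoned with 64-byte metadata, and any other sufficiently large non-zoned namespace can serve as the base. The traced filters assembled into one runnable snippet (taking nv_cache as whichever PCI address the first query returns):

    bdevs=$(scripts/rpc.py bdev_get_bdevs)
    cache_disks=$(jq -r '.[] | select(.md_size==64 and .zoned == false and
        .num_blocks >= 1310720).driver_specific.nvme[].pci_address' <<< "$bdevs")
    nv_cache=${cache_disks%%$'\n'*}    # first match, e.g. 0000:00:10.0
    base_disks=$(jq -r --arg nvc "$nv_cache" '.[] |
        select(.driver_specific.nvme[0].pci_address != $nvc and .zoned == false and
        .num_blocks >= 1310720).driver_specific.nvme[].pci_address' <<< "$bdevs")
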
00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:31.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=74468 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 74468 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 74468 ']' 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.285 08:38:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:31.285 [2024-11-19 08:38:10.464584] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
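
waitforlisten, entered above for pid 74468, is the harness's readiness gate: poll the UNIX-domain RPC socket until the freshly launched spdk_tgt answers. A minimal sketch of the same loop using the socket path and retry count from the trace (the real helper does more, e.g. checking that the pid is still alive on each pass; rpc_get_methods is a standard SPDK RPC):

    build/bin/spdk_tgt -m 7 &
    svcpid=$!
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && break
        kill -0 "$svcpid" || { echo "spdk_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done
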
00:19:31.285 [2024-11-19 08:38:10.464944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74468 ] 00:19:31.544 [2024-11-19 08:38:10.654068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:31.544 [2024-11-19 08:38:10.785909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.544 [2024-11-19 08:38:10.786012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.544 [2024-11-19 08:38:10.786019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.479 08:38:11 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.479 08:38:11 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:19:32.479 08:38:11 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:32.479 08:38:11 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:32.479 08:38:11 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:32.479 08:38:11 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:32.479 08:38:11 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:32.479 08:38:11 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:32.738 08:38:11 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:32.738 08:38:11 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:32.738 08:38:11 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:32.738 08:38:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:32.738 08:38:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:32.738 08:38:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:32.738 08:38:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:32.738 08:38:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:32.996 08:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:32.996 { 00:19:32.996 "name": "nvme0n1", 00:19:32.996 "aliases": [ 00:19:32.996 "f73737a8-3143-4770-85a4-8f20d08ca39e" 00:19:32.996 ], 00:19:32.996 "product_name": "NVMe disk", 00:19:32.996 "block_size": 4096, 00:19:32.996 "num_blocks": 1310720, 00:19:32.996 "uuid": "f73737a8-3143-4770-85a4-8f20d08ca39e", 00:19:32.996 "numa_id": -1, 00:19:32.996 "assigned_rate_limits": { 00:19:32.996 "rw_ios_per_sec": 0, 00:19:32.996 "rw_mbytes_per_sec": 0, 00:19:32.996 "r_mbytes_per_sec": 0, 00:19:32.996 "w_mbytes_per_sec": 0 00:19:32.996 }, 00:19:32.996 "claimed": false, 00:19:32.996 "zoned": false, 00:19:32.996 "supported_io_types": { 00:19:32.996 "read": true, 00:19:32.996 "write": true, 00:19:32.996 "unmap": true, 00:19:32.996 "flush": true, 00:19:32.996 "reset": true, 00:19:32.996 "nvme_admin": true, 00:19:32.996 "nvme_io": true, 00:19:32.997 "nvme_io_md": false, 00:19:32.997 "write_zeroes": true, 00:19:32.997 "zcopy": false, 00:19:32.997 "get_zone_info": false, 00:19:32.997 "zone_management": false, 00:19:32.997 "zone_append": false, 00:19:32.997 "compare": true, 00:19:32.997 "compare_and_write": false, 00:19:32.997 "abort": true, 00:19:32.997 
"seek_hole": false, 00:19:32.997 "seek_data": false, 00:19:32.997 "copy": true, 00:19:32.997 "nvme_iov_md": false 00:19:32.997 }, 00:19:32.997 "driver_specific": { 00:19:32.997 "nvme": [ 00:19:32.997 { 00:19:32.997 "pci_address": "0000:00:11.0", 00:19:32.997 "trid": { 00:19:32.997 "trtype": "PCIe", 00:19:32.997 "traddr": "0000:00:11.0" 00:19:32.997 }, 00:19:32.997 "ctrlr_data": { 00:19:32.997 "cntlid": 0, 00:19:32.997 "vendor_id": "0x1b36", 00:19:32.997 "model_number": "QEMU NVMe Ctrl", 00:19:32.997 "serial_number": "12341", 00:19:32.997 "firmware_revision": "8.0.0", 00:19:32.997 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:32.997 "oacs": { 00:19:32.997 "security": 0, 00:19:32.997 "format": 1, 00:19:32.997 "firmware": 0, 00:19:32.997 "ns_manage": 1 00:19:32.997 }, 00:19:32.997 "multi_ctrlr": false, 00:19:32.997 "ana_reporting": false 00:19:32.997 }, 00:19:32.997 "vs": { 00:19:32.997 "nvme_version": "1.4" 00:19:32.997 }, 00:19:32.997 "ns_data": { 00:19:32.997 "id": 1, 00:19:32.997 "can_share": false 00:19:32.997 } 00:19:32.997 } 00:19:32.997 ], 00:19:32.997 "mp_policy": "active_passive" 00:19:32.997 } 00:19:32.997 } 00:19:32.997 ]' 00:19:32.997 08:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:33.255 08:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:33.255 08:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:33.255 08:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:33.255 08:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:33.255 08:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:19:33.255 08:38:12 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:33.255 08:38:12 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:33.255 08:38:12 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:33.255 08:38:12 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:33.255 08:38:12 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:33.514 08:38:12 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:33.514 08:38:12 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:33.772 08:38:12 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=194d2908-08b9-493e-a6c4-e2f72b370ee1 00:19:33.772 08:38:12 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 194d2908-08b9-493e-a6c4-e2f72b370ee1 00:19:34.029 08:38:13 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=f28653a1-d11c-4f35-8f57-fb49a3006820 00:19:34.030 08:38:13 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f28653a1-d11c-4f35-8f57-fb49a3006820 00:19:34.030 08:38:13 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:19:34.030 08:38:13 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:34.030 08:38:13 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=f28653a1-d11c-4f35-8f57-fb49a3006820 00:19:34.030 08:38:13 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:19:34.030 08:38:13 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size f28653a1-d11c-4f35-8f57-fb49a3006820 00:19:34.030 08:38:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=f28653a1-d11c-4f35-8f57-fb49a3006820 
00:19:34.030 08:38:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:34.030 08:38:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:34.030 08:38:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:34.030 08:38:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f28653a1-d11c-4f35-8f57-fb49a3006820 00:19:34.597 08:38:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:34.597 { 00:19:34.597 "name": "f28653a1-d11c-4f35-8f57-fb49a3006820", 00:19:34.597 "aliases": [ 00:19:34.597 "lvs/nvme0n1p0" 00:19:34.597 ], 00:19:34.597 "product_name": "Logical Volume", 00:19:34.597 "block_size": 4096, 00:19:34.597 "num_blocks": 26476544, 00:19:34.597 "uuid": "f28653a1-d11c-4f35-8f57-fb49a3006820", 00:19:34.597 "assigned_rate_limits": { 00:19:34.597 "rw_ios_per_sec": 0, 00:19:34.597 "rw_mbytes_per_sec": 0, 00:19:34.597 "r_mbytes_per_sec": 0, 00:19:34.597 "w_mbytes_per_sec": 0 00:19:34.597 }, 00:19:34.597 "claimed": false, 00:19:34.597 "zoned": false, 00:19:34.597 "supported_io_types": { 00:19:34.597 "read": true, 00:19:34.597 "write": true, 00:19:34.597 "unmap": true, 00:19:34.597 "flush": false, 00:19:34.597 "reset": true, 00:19:34.597 "nvme_admin": false, 00:19:34.597 "nvme_io": false, 00:19:34.597 "nvme_io_md": false, 00:19:34.597 "write_zeroes": true, 00:19:34.597 "zcopy": false, 00:19:34.597 "get_zone_info": false, 00:19:34.597 "zone_management": false, 00:19:34.597 "zone_append": false, 00:19:34.597 "compare": false, 00:19:34.597 "compare_and_write": false, 00:19:34.597 "abort": false, 00:19:34.597 "seek_hole": true, 00:19:34.597 "seek_data": true, 00:19:34.597 "copy": false, 00:19:34.597 "nvme_iov_md": false 00:19:34.597 }, 00:19:34.597 "driver_specific": { 00:19:34.597 "lvol": { 00:19:34.597 "lvol_store_uuid": "194d2908-08b9-493e-a6c4-e2f72b370ee1", 00:19:34.597 "base_bdev": "nvme0n1", 00:19:34.597 "thin_provision": true, 00:19:34.597 "num_allocated_clusters": 0, 00:19:34.597 "snapshot": false, 00:19:34.597 "clone": false, 00:19:34.597 "esnap_clone": false 00:19:34.597 } 00:19:34.597 } 00:19:34.597 } 00:19:34.597 ]' 00:19:34.597 08:38:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:34.597 08:38:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:34.597 08:38:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:34.597 08:38:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:34.597 08:38:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:34.597 08:38:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:34.597 08:38:13 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:19:34.597 08:38:13 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:19:34.597 08:38:13 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:34.856 08:38:14 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:34.856 08:38:14 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:34.856 08:38:14 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size f28653a1-d11c-4f35-8f57-fb49a3006820 00:19:34.856 08:38:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=f28653a1-d11c-4f35-8f57-fb49a3006820 00:19:34.856 08:38:14 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:34.856 08:38:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:34.856 08:38:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:34.856 08:38:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f28653a1-d11c-4f35-8f57-fb49a3006820 00:19:35.114 08:38:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:35.114 { 00:19:35.114 "name": "f28653a1-d11c-4f35-8f57-fb49a3006820", 00:19:35.114 "aliases": [ 00:19:35.114 "lvs/nvme0n1p0" 00:19:35.114 ], 00:19:35.114 "product_name": "Logical Volume", 00:19:35.114 "block_size": 4096, 00:19:35.114 "num_blocks": 26476544, 00:19:35.114 "uuid": "f28653a1-d11c-4f35-8f57-fb49a3006820", 00:19:35.114 "assigned_rate_limits": { 00:19:35.114 "rw_ios_per_sec": 0, 00:19:35.114 "rw_mbytes_per_sec": 0, 00:19:35.114 "r_mbytes_per_sec": 0, 00:19:35.114 "w_mbytes_per_sec": 0 00:19:35.114 }, 00:19:35.114 "claimed": false, 00:19:35.114 "zoned": false, 00:19:35.114 "supported_io_types": { 00:19:35.114 "read": true, 00:19:35.114 "write": true, 00:19:35.114 "unmap": true, 00:19:35.114 "flush": false, 00:19:35.114 "reset": true, 00:19:35.114 "nvme_admin": false, 00:19:35.114 "nvme_io": false, 00:19:35.114 "nvme_io_md": false, 00:19:35.114 "write_zeroes": true, 00:19:35.114 "zcopy": false, 00:19:35.114 "get_zone_info": false, 00:19:35.114 "zone_management": false, 00:19:35.114 "zone_append": false, 00:19:35.114 "compare": false, 00:19:35.114 "compare_and_write": false, 00:19:35.114 "abort": false, 00:19:35.114 "seek_hole": true, 00:19:35.114 "seek_data": true, 00:19:35.114 "copy": false, 00:19:35.114 "nvme_iov_md": false 00:19:35.114 }, 00:19:35.114 "driver_specific": { 00:19:35.114 "lvol": { 00:19:35.114 "lvol_store_uuid": "194d2908-08b9-493e-a6c4-e2f72b370ee1", 00:19:35.114 "base_bdev": "nvme0n1", 00:19:35.114 "thin_provision": true, 00:19:35.114 "num_allocated_clusters": 0, 00:19:35.114 "snapshot": false, 00:19:35.114 "clone": false, 00:19:35.114 "esnap_clone": false 00:19:35.114 } 00:19:35.114 } 00:19:35.114 } 00:19:35.114 ]' 00:19:35.114 08:38:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:35.373 08:38:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:35.373 08:38:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:35.373 08:38:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:35.373 08:38:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:35.373 08:38:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:35.373 08:38:14 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:19:35.373 08:38:14 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:35.631 08:38:14 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:19:35.631 08:38:14 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:19:35.631 08:38:14 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:19:35.631 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:19:35.631 08:38:14 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size f28653a1-d11c-4f35-8f57-fb49a3006820 00:19:35.631 08:38:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=f28653a1-d11c-4f35-8f57-fb49a3006820 00:19:35.631 08:38:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:35.631 08:38:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:35.631 08:38:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:35.631 08:38:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f28653a1-d11c-4f35-8f57-fb49a3006820 00:19:35.889 08:38:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:35.889 { 00:19:35.889 "name": "f28653a1-d11c-4f35-8f57-fb49a3006820", 00:19:35.889 "aliases": [ 00:19:35.889 "lvs/nvme0n1p0" 00:19:35.889 ], 00:19:35.889 "product_name": "Logical Volume", 00:19:35.889 "block_size": 4096, 00:19:35.889 "num_blocks": 26476544, 00:19:35.889 "uuid": "f28653a1-d11c-4f35-8f57-fb49a3006820", 00:19:35.889 "assigned_rate_limits": { 00:19:35.889 "rw_ios_per_sec": 0, 00:19:35.889 "rw_mbytes_per_sec": 0, 00:19:35.889 "r_mbytes_per_sec": 0, 00:19:35.889 "w_mbytes_per_sec": 0 00:19:35.889 }, 00:19:35.889 "claimed": false, 00:19:35.889 "zoned": false, 00:19:35.889 "supported_io_types": { 00:19:35.889 "read": true, 00:19:35.889 "write": true, 00:19:35.889 "unmap": true, 00:19:35.889 "flush": false, 00:19:35.889 "reset": true, 00:19:35.889 "nvme_admin": false, 00:19:35.889 "nvme_io": false, 00:19:35.889 "nvme_io_md": false, 00:19:35.889 "write_zeroes": true, 00:19:35.889 "zcopy": false, 00:19:35.889 "get_zone_info": false, 00:19:35.889 "zone_management": false, 00:19:35.889 "zone_append": false, 00:19:35.889 "compare": false, 00:19:35.889 "compare_and_write": false, 00:19:35.889 "abort": false, 00:19:35.889 "seek_hole": true, 00:19:35.889 "seek_data": true, 00:19:35.889 "copy": false, 00:19:35.889 "nvme_iov_md": false 00:19:35.889 }, 00:19:35.889 "driver_specific": { 00:19:35.889 "lvol": { 00:19:35.889 "lvol_store_uuid": "194d2908-08b9-493e-a6c4-e2f72b370ee1", 00:19:35.889 "base_bdev": "nvme0n1", 00:19:35.889 "thin_provision": true, 00:19:35.889 "num_allocated_clusters": 0, 00:19:35.889 "snapshot": false, 00:19:35.889 "clone": false, 00:19:35.889 "esnap_clone": false 00:19:35.889 } 00:19:35.889 } 00:19:35.889 } 00:19:35.889 ]' 00:19:35.889 08:38:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:35.889 08:38:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:35.889 08:38:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:35.890 08:38:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:35.890 08:38:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:35.890 08:38:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:35.890 08:38:15 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:19:35.890 08:38:15 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:19:35.890 08:38:15 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f28653a1-d11c-4f35-8f57-fb49a3006820 -c nvc0n1p0 --l2p_dram_limit 60 00:19:36.149 [2024-11-19 08:38:15.396558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.149 [2024-11-19 08:38:15.396847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:36.149 [2024-11-19 08:38:15.396897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:36.149 
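A note on the trace above: get_bdev_size (autotest_common.sh@1382-1392) resolves the lvol's size by fetching its bdev record once and multiplying block_size by num_blocks — 26476544 blocks x 4096 B = 103424 MiB, the value echoed at @1392. A minimal standalone sketch of the same lookup, assuming jq is available and reusing the lvol UUID from this run:

    bdev=f28653a1-d11c-4f35-8f57-fb49a3006820
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev")
    bs=$(jq '.[] .block_size' <<< "$info")   # 4096
    nb=$(jq '.[] .num_blocks' <<< "$info")   # 26476544
    echo $(( bs * nb / 1024 / 1024 ))        # 103424 (MiB)

The "line 52: [: -eq: unary operator expected" message above is a shell quirk in fio.sh, not a test failure: the traced command '[' -eq 1 ']' shows the left-hand operand expanded to nothing, so [ took -eq as its first argument, returned an error status, and the run simply fell through to fio.sh@56. A hedged sketch of the usual fix (the actual variable name at fio.sh:52 is not visible in this log):

    # hypothetical flag name; quoting plus a default keeps [ from ever seeing a single operand
    if [ "${l2p_flat_flag:-0}" -eq 1 ]; then
        :  # flat-L2P branch
    fi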
[2024-11-19 08:38:15.396912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.149 [2024-11-19 08:38:15.397012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.149 [2024-11-19 08:38:15.397033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:36.149 [2024-11-19 08:38:15.397048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:36.149 [2024-11-19 08:38:15.397060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.149 [2024-11-19 08:38:15.397122] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:36.149 [2024-11-19 08:38:15.398126] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:36.149 [2024-11-19 08:38:15.398166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.149 [2024-11-19 08:38:15.398181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:36.149 [2024-11-19 08:38:15.398196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.071 ms 00:19:36.149 [2024-11-19 08:38:15.398208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.149 [2024-11-19 08:38:15.398322] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7154c13a-d809-467f-b810-bcb0cdb3468a 00:19:36.149 [2024-11-19 08:38:15.399525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.149 [2024-11-19 08:38:15.399566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:36.149 [2024-11-19 08:38:15.399582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:36.149 [2024-11-19 08:38:15.399596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.149 [2024-11-19 08:38:15.405005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.149 [2024-11-19 08:38:15.405300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:36.149 [2024-11-19 08:38:15.405430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.294 ms 00:19:36.149 [2024-11-19 08:38:15.405498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.149 [2024-11-19 08:38:15.405932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.149 [2024-11-19 08:38:15.406074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:36.149 [2024-11-19 08:38:15.406204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:19:36.149 [2024-11-19 08:38:15.406272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.149 [2024-11-19 08:38:15.406489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.149 [2024-11-19 08:38:15.406676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:36.149 [2024-11-19 08:38:15.406806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:36.149 [2024-11-19 08:38:15.406870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.149 [2024-11-19 08:38:15.407004] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:36.149 [2024-11-19 08:38:15.411818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.149 [2024-11-19 
08:38:15.412069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:36.149 [2024-11-19 08:38:15.412203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.821 ms 00:19:36.149 [2024-11-19 08:38:15.412267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.149 [2024-11-19 08:38:15.412464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.149 [2024-11-19 08:38:15.412524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:36.149 [2024-11-19 08:38:15.412579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:36.149 [2024-11-19 08:38:15.412712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.149 [2024-11-19 08:38:15.412867] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:36.149 [2024-11-19 08:38:15.413156] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:36.149 [2024-11-19 08:38:15.413327] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:36.149 [2024-11-19 08:38:15.413472] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:36.149 [2024-11-19 08:38:15.413619] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:36.149 [2024-11-19 08:38:15.413762] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:36.149 [2024-11-19 08:38:15.413965] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:36.149 [2024-11-19 08:38:15.414078] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:36.149 [2024-11-19 08:38:15.414222] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:36.149 [2024-11-19 08:38:15.414281] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:36.149 [2024-11-19 08:38:15.414346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.149 [2024-11-19 08:38:15.414453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:36.149 [2024-11-19 08:38:15.414575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.483 ms 00:19:36.150 [2024-11-19 08:38:15.414724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.150 [2024-11-19 08:38:15.414943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.150 [2024-11-19 08:38:15.415078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:36.150 [2024-11-19 08:38:15.415207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:36.150 [2024-11-19 08:38:15.415318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.150 [2024-11-19 08:38:15.415526] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:36.150 [2024-11-19 08:38:15.415670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:36.150 [2024-11-19 08:38:15.415802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:36.150 [2024-11-19 08:38:15.415926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.150 [2024-11-19 08:38:15.415956] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:19:36.150 [2024-11-19 08:38:15.415978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:36.150 [2024-11-19 08:38:15.415992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:36.150 [2024-11-19 08:38:15.416004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:36.150 [2024-11-19 08:38:15.416017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:36.150 [2024-11-19 08:38:15.416028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:36.150 [2024-11-19 08:38:15.416041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:36.150 [2024-11-19 08:38:15.416052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:36.150 [2024-11-19 08:38:15.416064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:36.150 [2024-11-19 08:38:15.416076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:36.150 [2024-11-19 08:38:15.416089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:36.150 [2024-11-19 08:38:15.416099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.150 [2024-11-19 08:38:15.416117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:36.150 [2024-11-19 08:38:15.416128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:36.150 [2024-11-19 08:38:15.416141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.150 [2024-11-19 08:38:15.416152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:36.150 [2024-11-19 08:38:15.416164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:36.150 [2024-11-19 08:38:15.416175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.150 [2024-11-19 08:38:15.416188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:36.150 [2024-11-19 08:38:15.416199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:36.150 [2024-11-19 08:38:15.416211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.150 [2024-11-19 08:38:15.416221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:36.150 [2024-11-19 08:38:15.416234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:36.150 [2024-11-19 08:38:15.416244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.150 [2024-11-19 08:38:15.416257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:36.150 [2024-11-19 08:38:15.416268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:36.150 [2024-11-19 08:38:15.416280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.150 [2024-11-19 08:38:15.416291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:36.150 [2024-11-19 08:38:15.416306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:36.150 [2024-11-19 08:38:15.416316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:36.150 [2024-11-19 08:38:15.416329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:36.150 [2024-11-19 08:38:15.416359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:36.150 [2024-11-19 08:38:15.416373] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:36.150 [2024-11-19 08:38:15.416385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:36.150 [2024-11-19 08:38:15.416398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:36.150 [2024-11-19 08:38:15.416409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.150 [2024-11-19 08:38:15.416422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:36.150 [2024-11-19 08:38:15.416433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:36.150 [2024-11-19 08:38:15.416451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.150 [2024-11-19 08:38:15.416462] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:36.150 [2024-11-19 08:38:15.416476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:36.150 [2024-11-19 08:38:15.416488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:36.150 [2024-11-19 08:38:15.416501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.150 [2024-11-19 08:38:15.416513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:36.150 [2024-11-19 08:38:15.416528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:36.150 [2024-11-19 08:38:15.416539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:36.150 [2024-11-19 08:38:15.416552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:36.150 [2024-11-19 08:38:15.416563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:36.150 [2024-11-19 08:38:15.416576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:36.150 [2024-11-19 08:38:15.416593] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:36.150 [2024-11-19 08:38:15.416627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:36.150 [2024-11-19 08:38:15.416644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:36.150 [2024-11-19 08:38:15.416657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:36.150 [2024-11-19 08:38:15.416669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:36.150 [2024-11-19 08:38:15.416682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:36.150 [2024-11-19 08:38:15.416694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:36.150 [2024-11-19 08:38:15.416707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:36.150 [2024-11-19 08:38:15.416719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:36.150 [2024-11-19 08:38:15.416733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:19:36.150 [2024-11-19 08:38:15.416744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:36.150 [2024-11-19 08:38:15.416760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:36.150 [2024-11-19 08:38:15.416771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:36.150 [2024-11-19 08:38:15.416787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:36.150 [2024-11-19 08:38:15.416799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:36.150 [2024-11-19 08:38:15.416813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:36.150 [2024-11-19 08:38:15.416824] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:36.150 [2024-11-19 08:38:15.416845] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:36.150 [2024-11-19 08:38:15.416862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:36.150 [2024-11-19 08:38:15.416876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:36.150 [2024-11-19 08:38:15.416887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:36.150 [2024-11-19 08:38:15.416903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:36.150 [2024-11-19 08:38:15.416917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.150 [2024-11-19 08:38:15.416931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:36.150 [2024-11-19 08:38:15.416944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.493 ms 00:19:36.150 [2024-11-19 08:38:15.416957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.150 [2024-11-19 08:38:15.417065] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
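The FTL startup trace above follows the bdev_ftl_create -b ftl0 -d <lvol> -c nvc0n1p0 --l2p_dram_limit 60 call, and its layout dump is internally consistent: 20971520 L2P entries x 4 KiB blocks = 80 GiB of logical space; at 4 B per entry ("L2P address size: 4") the l2p region needs exactly the 80.00 MiB shown; and each of the four p2l regions holds 2048 checkpoint pages of 4096 B, i.e. 8.00 MiB. A quick cross-check of those figures, assuming the MiB units the dump prints:

    echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 ))  # 80 -> GiB of logical space
    echo $(( 20971520 * 4 / 1024 / 1024 ))            # 80 -> MiB for the l2p region
    echo $(( 2048 * 4096 / 1024 / 1024 ))             # 8  -> MiB per p2l region

The scrub announced here is the slow step of this startup: the entries that follow show the 5 NV cache chunks taking roughly 3.8 s, out of about 4.24 s for the whole 'FTL startup' process.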
00:19:36.150 [2024-11-19 08:38:15.417097] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:40.336 [2024-11-19 08:38:19.193415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.336 [2024-11-19 08:38:19.193489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:40.336 [2024-11-19 08:38:19.193515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3776.373 ms 00:19:40.336 [2024-11-19 08:38:19.193530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-11-19 08:38:19.225815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.336 [2024-11-19 08:38:19.225883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.336 [2024-11-19 08:38:19.225904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.985 ms 00:19:40.336 [2024-11-19 08:38:19.225919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-11-19 08:38:19.226104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.336 [2024-11-19 08:38:19.226128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:40.336 [2024-11-19 08:38:19.226142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:40.336 [2024-11-19 08:38:19.226159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-11-19 08:38:19.277093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.336 [2024-11-19 08:38:19.277545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:40.336 [2024-11-19 08:38:19.277739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.869 ms 00:19:40.336 [2024-11-19 08:38:19.277856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-11-19 08:38:19.278007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.336 [2024-11-19 08:38:19.278264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:40.337 [2024-11-19 08:38:19.278402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:40.337 [2024-11-19 08:38:19.278528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.337 [2024-11-19 08:38:19.279249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.337 [2024-11-19 08:38:19.279381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:40.337 [2024-11-19 08:38:19.279499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:19:40.337 [2024-11-19 08:38:19.279640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.337 [2024-11-19 08:38:19.279949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.337 [2024-11-19 08:38:19.280060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:40.337 [2024-11-19 08:38:19.280155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:19:40.337 [2024-11-19 08:38:19.280186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.337 [2024-11-19 08:38:19.299334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.337 [2024-11-19 08:38:19.299412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:40.337 [2024-11-19 
08:38:19.299446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.102 ms 00:19:40.337 [2024-11-19 08:38:19.299464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.337 [2024-11-19 08:38:19.313203] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:40.337 [2024-11-19 08:38:19.328068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.337 [2024-11-19 08:38:19.328171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:40.337 [2024-11-19 08:38:19.328199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.395 ms 00:19:40.337 [2024-11-19 08:38:19.328216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.337 [2024-11-19 08:38:19.387962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.337 [2024-11-19 08:38:19.388061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:40.337 [2024-11-19 08:38:19.388092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.660 ms 00:19:40.337 [2024-11-19 08:38:19.388105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.337 [2024-11-19 08:38:19.388383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.337 [2024-11-19 08:38:19.388404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:40.337 [2024-11-19 08:38:19.388423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:19:40.337 [2024-11-19 08:38:19.388435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.337 [2024-11-19 08:38:19.421667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.337 [2024-11-19 08:38:19.421758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:40.337 [2024-11-19 08:38:19.421782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.080 ms 00:19:40.337 [2024-11-19 08:38:19.421796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.337 [2024-11-19 08:38:19.453160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.337 [2024-11-19 08:38:19.453222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:40.337 [2024-11-19 08:38:19.453255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.274 ms 00:19:40.337 [2024-11-19 08:38:19.453268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.337 [2024-11-19 08:38:19.454037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.337 [2024-11-19 08:38:19.454070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:40.337 [2024-11-19 08:38:19.454088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:19:40.337 [2024-11-19 08:38:19.454100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.337 [2024-11-19 08:38:19.536179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.337 [2024-11-19 08:38:19.536244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:40.337 [2024-11-19 08:38:19.536271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.976 ms 00:19:40.337 [2024-11-19 08:38:19.536287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.337 [2024-11-19 
08:38:19.568667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.337 [2024-11-19 08:38:19.568723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:40.337 [2024-11-19 08:38:19.568747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.252 ms 00:19:40.337 [2024-11-19 08:38:19.568759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.337 [2024-11-19 08:38:19.601934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.337 [2024-11-19 08:38:19.602006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:40.337 [2024-11-19 08:38:19.602030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.109 ms 00:19:40.337 [2024-11-19 08:38:19.602042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.596 [2024-11-19 08:38:19.635766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.596 [2024-11-19 08:38:19.635848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:40.596 [2024-11-19 08:38:19.635873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.616 ms 00:19:40.596 [2024-11-19 08:38:19.635886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.596 [2024-11-19 08:38:19.635986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.596 [2024-11-19 08:38:19.636005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:40.596 [2024-11-19 08:38:19.636025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:40.596 [2024-11-19 08:38:19.636040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.596 [2024-11-19 08:38:19.636262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.596 [2024-11-19 08:38:19.636286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:40.596 [2024-11-19 08:38:19.636307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:40.596 [2024-11-19 08:38:19.636320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.596 [2024-11-19 08:38:19.637634] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4240.541 ms, result 0 00:19:40.596 { 00:19:40.596 "name": "ftl0", 00:19:40.596 "uuid": "7154c13a-d809-467f-b810-bcb0cdb3468a" 00:19:40.596 } 00:19:40.596 08:38:19 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:19:40.596 08:38:19 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:40.596 08:38:19 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:40.596 08:38:19 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:19:40.596 08:38:19 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:40.596 08:38:19 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:40.596 08:38:19 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:40.855 08:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:41.115 [ 00:19:41.115 { 00:19:41.115 "name": "ftl0", 00:19:41.115 "aliases": [ 00:19:41.115 "7154c13a-d809-467f-b810-bcb0cdb3468a" 00:19:41.115 ], 00:19:41.115 "product_name": "FTL 
disk", 00:19:41.115 "block_size": 4096, 00:19:41.115 "num_blocks": 20971520, 00:19:41.115 "uuid": "7154c13a-d809-467f-b810-bcb0cdb3468a", 00:19:41.115 "assigned_rate_limits": { 00:19:41.115 "rw_ios_per_sec": 0, 00:19:41.115 "rw_mbytes_per_sec": 0, 00:19:41.115 "r_mbytes_per_sec": 0, 00:19:41.115 "w_mbytes_per_sec": 0 00:19:41.115 }, 00:19:41.115 "claimed": false, 00:19:41.115 "zoned": false, 00:19:41.115 "supported_io_types": { 00:19:41.115 "read": true, 00:19:41.115 "write": true, 00:19:41.115 "unmap": true, 00:19:41.115 "flush": true, 00:19:41.115 "reset": false, 00:19:41.115 "nvme_admin": false, 00:19:41.115 "nvme_io": false, 00:19:41.115 "nvme_io_md": false, 00:19:41.115 "write_zeroes": true, 00:19:41.115 "zcopy": false, 00:19:41.115 "get_zone_info": false, 00:19:41.115 "zone_management": false, 00:19:41.115 "zone_append": false, 00:19:41.115 "compare": false, 00:19:41.115 "compare_and_write": false, 00:19:41.115 "abort": false, 00:19:41.115 "seek_hole": false, 00:19:41.115 "seek_data": false, 00:19:41.115 "copy": false, 00:19:41.116 "nvme_iov_md": false 00:19:41.116 }, 00:19:41.116 "driver_specific": { 00:19:41.116 "ftl": { 00:19:41.116 "base_bdev": "f28653a1-d11c-4f35-8f57-fb49a3006820", 00:19:41.116 "cache": "nvc0n1p0" 00:19:41.116 } 00:19:41.116 } 00:19:41.116 } 00:19:41.116 ] 00:19:41.116 08:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:19:41.116 08:38:20 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:19:41.116 08:38:20 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:41.378 08:38:20 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:19:41.378 08:38:20 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:41.641 [2024-11-19 08:38:20.863328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.641 [2024-11-19 08:38:20.863635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:41.641 [2024-11-19 08:38:20.863670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:41.641 [2024-11-19 08:38:20.863687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.641 [2024-11-19 08:38:20.863752] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:41.641 [2024-11-19 08:38:20.867224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.641 [2024-11-19 08:38:20.867260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:41.641 [2024-11-19 08:38:20.867296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.441 ms 00:19:41.641 [2024-11-19 08:38:20.867308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.641 [2024-11-19 08:38:20.867892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.641 [2024-11-19 08:38:20.867919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:41.641 [2024-11-19 08:38:20.867937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:19:41.641 [2024-11-19 08:38:20.867949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.641 [2024-11-19 08:38:20.871259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.641 [2024-11-19 08:38:20.871296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:41.641 
[2024-11-19 08:38:20.871314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.274 ms 00:19:41.641 [2024-11-19 08:38:20.871326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.641 [2024-11-19 08:38:20.878104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.641 [2024-11-19 08:38:20.878376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:41.641 [2024-11-19 08:38:20.878417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.731 ms 00:19:41.641 [2024-11-19 08:38:20.878431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.641 [2024-11-19 08:38:20.911291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.641 [2024-11-19 08:38:20.911377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:41.641 [2024-11-19 08:38:20.911403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.676 ms 00:19:41.641 [2024-11-19 08:38:20.911415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.904 [2024-11-19 08:38:20.931113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.904 [2024-11-19 08:38:20.931187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:41.904 [2024-11-19 08:38:20.931213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.571 ms 00:19:41.904 [2024-11-19 08:38:20.931229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.904 [2024-11-19 08:38:20.931527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.904 [2024-11-19 08:38:20.931556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:41.904 [2024-11-19 08:38:20.931573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:19:41.904 [2024-11-19 08:38:20.931586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.904 [2024-11-19 08:38:20.963284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.904 [2024-11-19 08:38:20.963331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:41.904 [2024-11-19 08:38:20.963353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.632 ms 00:19:41.904 [2024-11-19 08:38:20.963366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.904 [2024-11-19 08:38:20.994572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.904 [2024-11-19 08:38:20.994643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:41.904 [2024-11-19 08:38:20.994667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.147 ms 00:19:41.904 [2024-11-19 08:38:20.994680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.904 [2024-11-19 08:38:21.025442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.904 [2024-11-19 08:38:21.025645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:41.904 [2024-11-19 08:38:21.025679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.700 ms 00:19:41.904 [2024-11-19 08:38:21.025693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.904 [2024-11-19 08:38:21.056542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.904 [2024-11-19 08:38:21.056588] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:41.904 [2024-11-19 08:38:21.056623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.673 ms 00:19:41.904 [2024-11-19 08:38:21.056639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.905 [2024-11-19 08:38:21.056696] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:41.905 [2024-11-19 08:38:21.056728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.056998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 
[2024-11-19 08:38:21.057026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:19:41.905 [2024-11-19 08:38:21.057375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:41.905 [2024-11-19 08:38:21.057875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.057913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.057925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.057939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.057951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.057967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.057980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.057993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.058005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.058018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.058030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.058044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.058056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.058070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.058085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.058102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.058114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.058130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:41.906 [2024-11-19 08:38:21.058151] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:41.906 [2024-11-19 08:38:21.058165] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7154c13a-d809-467f-b810-bcb0cdb3468a 00:19:41.906 [2024-11-19 08:38:21.058177] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:41.906 [2024-11-19 08:38:21.058192] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:41.906 [2024-11-19 08:38:21.058203] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:41.906 [2024-11-19 08:38:21.058220] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:41.906 [2024-11-19 08:38:21.058231] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:41.906 [2024-11-19 08:38:21.058244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:41.906 [2024-11-19 08:38:21.058255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:41.906 [2024-11-19 08:38:21.058267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:41.906 [2024-11-19 08:38:21.058277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:41.906 [2024-11-19 08:38:21.058291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.906 [2024-11-19 08:38:21.058303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:41.906 [2024-11-19 08:38:21.058318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.599 ms 00:19:41.906 [2024-11-19 08:38:21.058329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.906 [2024-11-19 08:38:21.075113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.906 [2024-11-19 08:38:21.075167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:41.906 [2024-11-19 08:38:21.075188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.704 ms 00:19:41.906 [2024-11-19 08:38:21.075201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.906 [2024-11-19 08:38:21.075675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.906 [2024-11-19 08:38:21.075698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:41.906 [2024-11-19 08:38:21.075714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:19:41.906 [2024-11-19 08:38:21.075725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.906 [2024-11-19 08:38:21.133931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.906 [2024-11-19 08:38:21.133994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:41.906 [2024-11-19 08:38:21.134017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.906 [2024-11-19 08:38:21.134030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
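The shutdown dump above describes a device that never saw user I/O: every band reads 0 / 261120 with wr_cnt 0, user writes is 0, and the 960 total writes are FTL's own metadata traffic — consistent with WAF printing as inf (total writes divided by zero user writes). The band geometry also lines up with the 4096 B block size reported earlier; a quick check:

    echo $(( 261120 * 4096 / 1024 / 1024 ))  # 1020 -> MiB per band
    echo $(( 102400 / 1020 ))                # 100  -> full bands on the 102400 MiB base device

The Rollback entries below are the normal teardown path, and the bare 'true' that follows them is evidently the JSON-RPC result of the bdev_ftl_unload call issued at fio.sh@73.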
00:19:41.906 [2024-11-19 08:38:21.134119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.906 [2024-11-19 08:38:21.134135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:41.906 [2024-11-19 08:38:21.134151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.906 [2024-11-19 08:38:21.134163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.906 [2024-11-19 08:38:21.134326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.906 [2024-11-19 08:38:21.134348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:41.906 [2024-11-19 08:38:21.134367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.906 [2024-11-19 08:38:21.134379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.906 [2024-11-19 08:38:21.134417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:41.906 [2024-11-19 08:38:21.134438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:41.906 [2024-11-19 08:38:21.134453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:41.906 [2024-11-19 08:38:21.134465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.168 [2024-11-19 08:38:21.244237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.168 [2024-11-19 08:38:21.244446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:42.168 [2024-11-19 08:38:21.244483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.168 [2024-11-19 08:38:21.244496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.168 [2024-11-19 08:38:21.328794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.168 [2024-11-19 08:38:21.328863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:42.168 [2024-11-19 08:38:21.328887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.168 [2024-11-19 08:38:21.328900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.168 [2024-11-19 08:38:21.329037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.168 [2024-11-19 08:38:21.329057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:42.168 [2024-11-19 08:38:21.329072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.168 [2024-11-19 08:38:21.329087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.168 [2024-11-19 08:38:21.329175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.168 [2024-11-19 08:38:21.329193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:42.168 [2024-11-19 08:38:21.329209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.168 [2024-11-19 08:38:21.329221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.168 [2024-11-19 08:38:21.329366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.168 [2024-11-19 08:38:21.329387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:42.168 [2024-11-19 08:38:21.329402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.168 [2024-11-19 
08:38:21.329414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.168 [2024-11-19 08:38:21.329500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.168 [2024-11-19 08:38:21.329519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:42.168 [2024-11-19 08:38:21.329534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.168 [2024-11-19 08:38:21.329546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.168 [2024-11-19 08:38:21.329631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.168 [2024-11-19 08:38:21.329649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:42.168 [2024-11-19 08:38:21.329665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.168 [2024-11-19 08:38:21.329677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.168 [2024-11-19 08:38:21.329750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:42.168 [2024-11-19 08:38:21.329768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:42.168 [2024-11-19 08:38:21.329783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:42.168 [2024-11-19 08:38:21.329794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.168 [2024-11-19 08:38:21.329987] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 466.651 ms, result 0 00:19:42.168 true 00:19:42.168 08:38:21 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 74468 00:19:42.168 08:38:21 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 74468 ']' 00:19:42.168 08:38:21 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 74468 00:19:42.168 08:38:21 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:19:42.168 08:38:21 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.169 08:38:21 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74468 00:19:42.169 killing process with pid 74468 00:19:42.169 08:38:21 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:42.169 08:38:21 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:42.169 08:38:21 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74468' 00:19:42.169 08:38:21 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 74468 00:19:42.169 08:38:21 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 74468 00:19:47.442 08:38:25 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:19:47.442 08:38:25 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:47.442 08:38:25 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:47.443 08:38:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:19:47.443 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:19:47.443 fio-3.35 00:19:47.443 Starting 1 thread 00:19:52.713 00:19:52.713 test: (groupid=0, jobs=1): err= 0: pid=74690: Tue Nov 19 08:38:31 2024 00:19:52.713 read: IOPS=985, BW=65.5MiB/s (68.6MB/s)(255MiB/3888msec) 00:19:52.713 slat (nsec): min=5684, max=38145, avg=7399.13, stdev=2890.00 00:19:52.713 clat (usec): min=301, max=1148, avg=454.87, stdev=61.24 00:19:52.713 lat (usec): min=307, max=1155, avg=462.27, stdev=61.98 00:19:52.713 clat percentiles (usec): 00:19:52.713 | 1.00th=[ 359], 5.00th=[ 379], 10.00th=[ 383], 20.00th=[ 392], 00:19:52.713 | 30.00th=[ 416], 40.00th=[ 449], 50.00th=[ 453], 60.00th=[ 461], 00:19:52.713 | 70.00th=[ 474], 80.00th=[ 502], 90.00th=[ 537], 95.00th=[ 562], 00:19:52.713 | 99.00th=[ 619], 99.50th=[ 652], 99.90th=[ 807], 99.95th=[ 922], 00:19:52.713 | 99.99th=[ 1156] 00:19:52.713 write: IOPS=992, BW=65.9MiB/s (69.1MB/s)(256MiB/3884msec); 0 zone resets 00:19:52.713 slat (usec): min=19, max=153, avg=24.59, stdev= 5.58 00:19:52.713 clat (usec): min=348, max=973, avg=510.20, stdev=73.42 00:19:52.713 lat (usec): min=369, max=995, avg=534.79, stdev=74.04 00:19:52.713 clat percentiles (usec): 00:19:52.713 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 416], 20.00th=[ 457], 00:19:52.713 | 30.00th=[ 478], 40.00th=[ 482], 50.00th=[ 494], 60.00th=[ 519], 00:19:52.713 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 594], 95.00th=[ 627], 00:19:52.713 | 99.00th=[ 783], 99.50th=[ 832], 99.90th=[ 914], 99.95th=[ 955], 00:19:52.713 | 99.99th=[ 971] 00:19:52.713 bw ( KiB/s): min=64328, max=72216, per=99.06%, avg=66873.14, stdev=2766.87, samples=7 00:19:52.713 iops : min= 946, max= 1062, avg=983.43, stdev=40.69, samples=7 00:19:52.713 lat (usec) : 500=66.47%, 750=32.68%, 1000=0.83% 00:19:52.713 lat (msec) : 
2=0.01% 00:19:52.713 cpu : usr=99.02%, sys=0.23%, ctx=30, majf=0, minf=1169 00:19:52.713 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.714 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:52.714 00:19:52.714 Run status group 0 (all jobs): 00:19:52.714 READ: bw=65.5MiB/s (68.6MB/s), 65.5MiB/s-65.5MiB/s (68.6MB/s-68.6MB/s), io=255MiB (267MB), run=3888-3888msec 00:19:52.714 WRITE: bw=65.9MiB/s (69.1MB/s), 65.9MiB/s-65.9MiB/s (69.1MB/s-69.1MB/s), io=256MiB (269MB), run=3884-3884msec 00:19:54.091 ----------------------------------------------------- 00:19:54.091 Suppressions used: 00:19:54.091 count bytes template 00:19:54.091 1 5 /usr/src/fio/parse.c 00:19:54.091 1 8 libtcmalloc_minimal.so 00:19:54.091 1 904 libcrypto.so 00:19:54.091 ----------------------------------------------------- 00:19:54.091 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:54.091 08:38:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:19:54.350 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:54.350 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:54.350 fio-3.35 00:19:54.350 Starting 2 threads 00:20:26.422 00:20:26.422 first_half: (groupid=0, jobs=1): err= 0: pid=74793: Tue Nov 19 08:39:02 2024 00:20:26.422 read: IOPS=2364, BW=9459KiB/s (9686kB/s)(256MiB/27687msec) 00:20:26.422 slat (nsec): min=4539, max=68076, avg=7373.53, stdev=1810.35 00:20:26.422 clat (usec): min=718, max=342410, avg=45883.89, stdev=28538.95 00:20:26.422 lat (usec): min=722, max=342419, avg=45891.26, stdev=28539.14 00:20:26.422 clat percentiles (msec): 00:20:26.422 | 1.00th=[ 11], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 38], 00:20:26.422 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40], 00:20:26.422 | 70.00th=[ 43], 80.00th=[ 45], 90.00th=[ 51], 95.00th=[ 88], 00:20:26.423 | 99.00th=[ 190], 99.50th=[ 215], 99.90th=[ 259], 99.95th=[ 300], 00:20:26.423 | 99.99th=[ 338] 00:20:26.423 write: IOPS=2370, BW=9481KiB/s (9709kB/s)(256MiB/27648msec); 0 zone resets 00:20:26.423 slat (usec): min=6, max=411, avg= 8.86, stdev= 5.41 00:20:26.423 clat (usec): min=480, max=56837, avg=8197.49, stdev=8370.89 00:20:26.423 lat (usec): min=491, max=56848, avg=8206.35, stdev=8371.20 00:20:26.423 clat percentiles (usec): 00:20:26.423 | 1.00th=[ 1090], 5.00th=[ 1532], 10.00th=[ 1860], 20.00th=[ 3294], 00:20:26.423 | 30.00th=[ 4228], 40.00th=[ 5473], 50.00th=[ 6259], 60.00th=[ 7111], 00:20:26.423 | 70.00th=[ 7832], 80.00th=[ 9503], 90.00th=[15008], 95.00th=[23200], 00:20:26.423 | 99.00th=[46924], 99.50th=[49546], 99.90th=[53740], 99.95th=[54789], 00:20:26.423 | 99.99th=[55837] 00:20:26.423 bw ( KiB/s): min= 192, max=48128, per=100.00%, avg=21705.12, stdev=12175.34, samples=24 00:20:26.423 iops : min= 48, max=12032, avg=5426.25, stdev=3043.85, samples=24 00:20:26.423 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.27% 00:20:26.423 lat (msec) : 2=5.48%, 4=7.84%, 10=27.01%, 20=7.76%, 50=46.37% 00:20:26.423 lat (msec) : 100=2.95%, 250=2.18%, 500=0.06% 00:20:26.423 cpu : usr=99.13%, sys=0.15%, ctx=53, majf=0, minf=5549 00:20:26.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:26.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.423 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:26.423 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:26.423 second_half: (groupid=0, jobs=1): err= 0: pid=74794: Tue Nov 19 08:39:02 2024 00:20:26.423 read: IOPS=2383, BW=9535KiB/s (9764kB/s)(256MiB/27473msec) 00:20:26.423 slat (nsec): min=4575, max=40522, avg=7377.08, stdev=1812.59 00:20:26.423 clat (msec): min=12, max=260, avg=46.37, stdev=25.53 00:20:26.423 lat (msec): min=12, max=260, avg=46.38, stdev=25.53 00:20:26.423 clat percentiles (msec): 00:20:26.423 | 1.00th=[ 35], 5.00th=[ 38], 10.00th=[ 38], 20.00th=[ 38], 00:20:26.423 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40], 00:20:26.423 | 70.00th=[ 44], 80.00th=[ 46], 90.00th=[ 53], 95.00th=[ 84], 00:20:26.423 | 99.00th=[ 184], 
99.50th=[ 201], 99.90th=[ 232], 99.95th=[ 253], 00:20:26.423 | 99.99th=[ 259] 00:20:26.423 write: IOPS=2400, BW=9601KiB/s (9832kB/s)(256MiB/27303msec); 0 zone resets 00:20:26.423 slat (usec): min=5, max=545, avg= 8.60, stdev= 4.98 00:20:26.423 clat (usec): min=504, max=48296, avg=7293.78, stdev=4745.71 00:20:26.423 lat (usec): min=518, max=48306, avg=7302.38, stdev=4745.94 00:20:26.423 clat percentiles (usec): 00:20:26.423 | 1.00th=[ 1270], 5.00th=[ 2180], 10.00th=[ 3064], 20.00th=[ 3949], 00:20:26.423 | 30.00th=[ 4948], 40.00th=[ 5669], 50.00th=[ 6325], 60.00th=[ 6915], 00:20:26.423 | 70.00th=[ 7635], 80.00th=[ 8979], 90.00th=[13698], 95.00th=[15664], 00:20:26.423 | 99.00th=[25035], 99.50th=[32375], 99.90th=[43254], 99.95th=[44827], 00:20:26.423 | 99.99th=[46924] 00:20:26.423 bw ( KiB/s): min= 2496, max=43728, per=100.00%, avg=24783.90, stdev=14761.69, samples=21 00:20:26.423 iops : min= 624, max=10932, avg=6195.90, stdev=3690.46, samples=21 00:20:26.423 lat (usec) : 750=0.06%, 1000=0.15% 00:20:26.423 lat (msec) : 2=1.83%, 4=8.28%, 10=30.87%, 20=7.93%, 50=45.12% 00:20:26.423 lat (msec) : 100=3.72%, 250=2.01%, 500=0.03% 00:20:26.423 cpu : usr=99.10%, sys=0.19%, ctx=39, majf=0, minf=5562 00:20:26.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:26.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:26.423 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:26.423 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:26.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:26.423 00:20:26.423 Run status group 0 (all jobs): 00:20:26.423 READ: bw=18.5MiB/s (19.4MB/s), 9459KiB/s-9535KiB/s (9686kB/s-9764kB/s), io=512MiB (536MB), run=27473-27687msec 00:20:26.423 WRITE: bw=18.5MiB/s (19.4MB/s), 9481KiB/s-9601KiB/s (9709kB/s-9832kB/s), io=512MiB (537MB), run=27303-27648msec 00:20:26.423 ----------------------------------------------------- 00:20:26.423 Suppressions used: 00:20:26.423 count bytes template 00:20:26.423 2 10 /usr/src/fio/parse.c 00:20:26.423 2 192 /usr/src/fio/iolog.c 00:20:26.423 1 8 libtcmalloc_minimal.so 00:20:26.423 1 904 libcrypto.so 00:20:26.423 ----------------------------------------------------- 00:20:26.423 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:26.423 08:39:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:26.423 08:39:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:26.423 08:39:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:26.423 08:39:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:26.423 08:39:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:26.423 08:39:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:26.423 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:26.423 fio-3.35 00:20:26.423 Starting 1 thread 00:20:44.500 00:20:44.500 test: (groupid=0, jobs=1): err= 0: pid=75142: Tue Nov 19 08:39:22 2024 00:20:44.500 read: IOPS=6462, BW=25.2MiB/s (26.5MB/s)(255MiB/10090msec) 00:20:44.500 slat (nsec): min=4571, max=74477, avg=6634.12, stdev=1685.06 00:20:44.500 clat (usec): min=742, max=38556, avg=19796.10, stdev=1605.48 00:20:44.500 lat (usec): min=747, max=38576, avg=19802.73, stdev=1605.53 00:20:44.500 clat percentiles (usec): 00:20:44.500 | 1.00th=[18482], 5.00th=[18744], 10.00th=[18744], 20.00th=[19006], 00:20:44.500 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19268], 60.00th=[19530], 00:20:44.500 | 70.00th=[19792], 80.00th=[20055], 90.00th=[21627], 95.00th=[22414], 00:20:44.500 | 99.00th=[26084], 99.50th=[28181], 99.90th=[29230], 99.95th=[33817], 00:20:44.500 | 99.99th=[38011] 00:20:44.500 write: IOPS=11.9k, BW=46.4MiB/s (48.7MB/s)(256MiB/5516msec); 0 zone resets 00:20:44.500 slat (usec): min=5, max=167, avg= 8.93, stdev= 4.15 00:20:44.500 clat (usec): min=609, max=68453, avg=10714.20, stdev=13339.29 00:20:44.501 lat (usec): min=635, max=68462, avg=10723.12, stdev=13339.31 00:20:44.501 clat percentiles (usec): 00:20:44.501 | 1.00th=[ 938], 5.00th=[ 1123], 10.00th=[ 1254], 20.00th=[ 1434], 00:20:44.501 | 30.00th=[ 1663], 40.00th=[ 2212], 50.00th=[ 7308], 60.00th=[ 8291], 00:20:44.501 | 70.00th=[ 9372], 80.00th=[10814], 90.00th=[38536], 95.00th=[41681], 00:20:44.501 | 99.00th=[46400], 99.50th=[49021], 99.90th=[56361], 99.95th=[57934], 00:20:44.501 | 99.99th=[63177] 00:20:44.501 bw ( KiB/s): min= 1016, max=62680, per=91.93%, avg=43690.67, stdev=15814.69, samples=12 00:20:44.501 iops : min= 254, max=15670, avg=10922.67, stdev=3953.67, samples=12 00:20:44.501 lat (usec) : 750=0.02%, 1000=0.91% 00:20:44.501 lat (msec) : 2=18.31%, 4=1.72%, 10=16.52%, 20=42.77%, 50=19.54% 00:20:44.501 lat (msec) : 100=0.21% 00:20:44.501 cpu : usr=99.00%, sys=0.18%, ctx=32, majf=0, minf=5565 00:20:44.501 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:44.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.501 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:44.501 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:44.501 00:20:44.501 Run status group 0 (all jobs): 00:20:44.501 READ: bw=25.2MiB/s (26.5MB/s), 25.2MiB/s-25.2MiB/s (26.5MB/s-26.5MB/s), io=255MiB (267MB), run=10090-10090msec 00:20:44.501 WRITE: bw=46.4MiB/s (48.7MB/s), 46.4MiB/s-46.4MiB/s (48.7MB/s-48.7MB/s), io=256MiB (268MB), run=5516-5516msec 00:20:44.759 ----------------------------------------------------- 00:20:44.759 Suppressions used: 00:20:44.759 count bytes template 00:20:44.759 1 5 /usr/src/fio/parse.c 00:20:44.759 2 192 /usr/src/fio/iolog.c 00:20:44.759 1 8 libtcmalloc_minimal.so 00:20:44.759 1 904 libcrypto.so 00:20:44.759 ----------------------------------------------------- 00:20:44.759 00:20:44.759 08:39:23 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:20:44.759 08:39:23 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.759 08:39:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:44.759 08:39:23 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:44.759 08:39:23 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:20:44.759 08:39:23 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:44.759 Remove shared memory files 00:20:44.759 08:39:23 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:20:44.759 08:39:23 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:20:44.759 08:39:23 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58284 /dev/shm/spdk_tgt_trace.pid73377 00:20:44.759 08:39:23 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:44.759 08:39:23 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:20:44.759 ************************************ 00:20:44.759 END TEST ftl_fio_basic 00:20:44.759 ************************************ 00:20:44.759 00:20:44.759 real 1m13.847s 00:20:44.759 user 2m45.582s 00:20:44.759 sys 0m3.763s 00:20:44.759 08:39:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.759 08:39:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:44.759 08:39:24 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:44.759 08:39:24 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:44.760 08:39:24 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.760 08:39:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:44.760 ************************************ 00:20:44.760 START TEST ftl_bdevperf 00:20:44.760 ************************************ 00:20:44.760 08:39:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:45.019 * Looking for test storage... 
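[Editor's note] The run_test line above hands test/ftl/bdevperf.sh two PCIe BDFs: the base NVMe device (0000:00:11.0) first and the NV-cache device (0000:00:10.0) second. A rough sketch of reproducing this stage outside CI, assuming the /home/vagrant/spdk_repo layout used throughout this log and that the devices still need binding via scripts/setup.sh:

  cd /home/vagrant/spdk_repo/spdk
  sudo ./scripts/setup.sh                               # bind the NVMe devices to the userspace driver first
  sudo ./test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0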
00:20:45.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:45.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.019 --rc genhtml_branch_coverage=1 00:20:45.019 --rc genhtml_function_coverage=1 00:20:45.019 --rc genhtml_legend=1 00:20:45.019 --rc geninfo_all_blocks=1 00:20:45.019 --rc geninfo_unexecuted_blocks=1 00:20:45.019 00:20:45.019 ' 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:45.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.019 --rc genhtml_branch_coverage=1 00:20:45.019 
--rc genhtml_function_coverage=1 00:20:45.019 --rc genhtml_legend=1 00:20:45.019 --rc geninfo_all_blocks=1 00:20:45.019 --rc geninfo_unexecuted_blocks=1 00:20:45.019 00:20:45.019 ' 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:45.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.019 --rc genhtml_branch_coverage=1 00:20:45.019 --rc genhtml_function_coverage=1 00:20:45.019 --rc genhtml_legend=1 00:20:45.019 --rc geninfo_all_blocks=1 00:20:45.019 --rc geninfo_unexecuted_blocks=1 00:20:45.019 00:20:45.019 ' 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:45.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.019 --rc genhtml_branch_coverage=1 00:20:45.019 --rc genhtml_function_coverage=1 00:20:45.019 --rc genhtml_legend=1 00:20:45.019 --rc geninfo_all_blocks=1 00:20:45.019 --rc geninfo_unexecuted_blocks=1 00:20:45.019 00:20:45.019 ' 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75403 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75403 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75403 ']' 00:20:45.019 08:39:24 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.020 08:39:24 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.020 08:39:24 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.020 08:39:24 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.020 08:39:24 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:45.297 [2024-11-19 08:39:24.320753] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
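[Editor's note] bdevperf was started above with -z (wait for RPC before running) and -T ftl0, so everything that follows, up to the bdev_ftl_create call, is ordinary JSON-RPC traffic on /var/tmp/spdk.sock. The same FTL device can be assembled by hand with rpc.py; the sketch below simply mirrors the commands traced later in this log (UUIDs differ per run, and the 103424/5171 MiB sizes are values this particular run computed, not fixed constants):

  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  ./scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  ./scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  ./scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
  ./scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20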
00:20:45.297 [2024-11-19 08:39:24.321171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75403 ] 00:20:45.297 [2024-11-19 08:39:24.511811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.589 [2024-11-19 08:39:24.637763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.163 08:39:25 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.163 08:39:25 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:20:46.163 08:39:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:46.163 08:39:25 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:20:46.163 08:39:25 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:46.163 08:39:25 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:20:46.163 08:39:25 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:20:46.163 08:39:25 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:46.729 08:39:25 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:46.729 08:39:25 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:20:46.729 08:39:25 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:46.729 08:39:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:46.729 08:39:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:46.730 08:39:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:46.730 08:39:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:46.730 08:39:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:46.988 08:39:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:46.988 { 00:20:46.989 "name": "nvme0n1", 00:20:46.989 "aliases": [ 00:20:46.989 "eb7fd84e-39df-4dea-90e3-0b29bb201bd1" 00:20:46.989 ], 00:20:46.989 "product_name": "NVMe disk", 00:20:46.989 "block_size": 4096, 00:20:46.989 "num_blocks": 1310720, 00:20:46.989 "uuid": "eb7fd84e-39df-4dea-90e3-0b29bb201bd1", 00:20:46.989 "numa_id": -1, 00:20:46.989 "assigned_rate_limits": { 00:20:46.989 "rw_ios_per_sec": 0, 00:20:46.989 "rw_mbytes_per_sec": 0, 00:20:46.989 "r_mbytes_per_sec": 0, 00:20:46.989 "w_mbytes_per_sec": 0 00:20:46.989 }, 00:20:46.989 "claimed": true, 00:20:46.989 "claim_type": "read_many_write_one", 00:20:46.989 "zoned": false, 00:20:46.989 "supported_io_types": { 00:20:46.989 "read": true, 00:20:46.989 "write": true, 00:20:46.989 "unmap": true, 00:20:46.989 "flush": true, 00:20:46.989 "reset": true, 00:20:46.989 "nvme_admin": true, 00:20:46.989 "nvme_io": true, 00:20:46.989 "nvme_io_md": false, 00:20:46.989 "write_zeroes": true, 00:20:46.989 "zcopy": false, 00:20:46.989 "get_zone_info": false, 00:20:46.989 "zone_management": false, 00:20:46.989 "zone_append": false, 00:20:46.989 "compare": true, 00:20:46.989 "compare_and_write": false, 00:20:46.989 "abort": true, 00:20:46.989 "seek_hole": false, 00:20:46.989 "seek_data": false, 00:20:46.989 "copy": true, 00:20:46.989 "nvme_iov_md": false 00:20:46.989 }, 00:20:46.989 "driver_specific": { 00:20:46.989 
"nvme": [ 00:20:46.989 { 00:20:46.989 "pci_address": "0000:00:11.0", 00:20:46.989 "trid": { 00:20:46.989 "trtype": "PCIe", 00:20:46.989 "traddr": "0000:00:11.0" 00:20:46.989 }, 00:20:46.989 "ctrlr_data": { 00:20:46.989 "cntlid": 0, 00:20:46.989 "vendor_id": "0x1b36", 00:20:46.989 "model_number": "QEMU NVMe Ctrl", 00:20:46.989 "serial_number": "12341", 00:20:46.989 "firmware_revision": "8.0.0", 00:20:46.989 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:46.989 "oacs": { 00:20:46.989 "security": 0, 00:20:46.989 "format": 1, 00:20:46.989 "firmware": 0, 00:20:46.989 "ns_manage": 1 00:20:46.989 }, 00:20:46.989 "multi_ctrlr": false, 00:20:46.989 "ana_reporting": false 00:20:46.989 }, 00:20:46.989 "vs": { 00:20:46.989 "nvme_version": "1.4" 00:20:46.989 }, 00:20:46.989 "ns_data": { 00:20:46.989 "id": 1, 00:20:46.989 "can_share": false 00:20:46.989 } 00:20:46.989 } 00:20:46.989 ], 00:20:46.989 "mp_policy": "active_passive" 00:20:46.989 } 00:20:46.989 } 00:20:46.989 ]' 00:20:46.989 08:39:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:46.989 08:39:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:46.989 08:39:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:46.989 08:39:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:46.989 08:39:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:46.989 08:39:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:20:46.989 08:39:26 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:20:46.989 08:39:26 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:46.989 08:39:26 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:20:46.989 08:39:26 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:46.989 08:39:26 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:47.247 08:39:26 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=194d2908-08b9-493e-a6c4-e2f72b370ee1 00:20:47.247 08:39:26 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:20:47.247 08:39:26 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 194d2908-08b9-493e-a6c4-e2f72b370ee1 00:20:47.505 08:39:26 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:48.071 08:39:27 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=878ff754-395e-4762-b954-26761ad78b72 00:20:48.071 08:39:27 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 878ff754-395e-4762-b954-26761ad78b72 00:20:48.329 08:39:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=46d1117d-0ee5-47d9-be22-46a1d44e40bb 00:20:48.329 08:39:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 46d1117d-0ee5-47d9-be22-46a1d44e40bb 00:20:48.329 08:39:27 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:20:48.329 08:39:27 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:48.329 08:39:27 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=46d1117d-0ee5-47d9-be22-46a1d44e40bb 00:20:48.329 08:39:27 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:20:48.329 08:39:27 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 46d1117d-0ee5-47d9-be22-46a1d44e40bb 00:20:48.329 08:39:27 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=46d1117d-0ee5-47d9-be22-46a1d44e40bb 00:20:48.330 08:39:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:48.330 08:39:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:48.330 08:39:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:48.330 08:39:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 46d1117d-0ee5-47d9-be22-46a1d44e40bb 00:20:48.330 08:39:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:48.330 { 00:20:48.330 "name": "46d1117d-0ee5-47d9-be22-46a1d44e40bb", 00:20:48.330 "aliases": [ 00:20:48.330 "lvs/nvme0n1p0" 00:20:48.330 ], 00:20:48.330 "product_name": "Logical Volume", 00:20:48.330 "block_size": 4096, 00:20:48.330 "num_blocks": 26476544, 00:20:48.330 "uuid": "46d1117d-0ee5-47d9-be22-46a1d44e40bb", 00:20:48.330 "assigned_rate_limits": { 00:20:48.330 "rw_ios_per_sec": 0, 00:20:48.330 "rw_mbytes_per_sec": 0, 00:20:48.330 "r_mbytes_per_sec": 0, 00:20:48.330 "w_mbytes_per_sec": 0 00:20:48.330 }, 00:20:48.330 "claimed": false, 00:20:48.330 "zoned": false, 00:20:48.330 "supported_io_types": { 00:20:48.330 "read": true, 00:20:48.330 "write": true, 00:20:48.330 "unmap": true, 00:20:48.330 "flush": false, 00:20:48.330 "reset": true, 00:20:48.330 "nvme_admin": false, 00:20:48.330 "nvme_io": false, 00:20:48.330 "nvme_io_md": false, 00:20:48.330 "write_zeroes": true, 00:20:48.330 "zcopy": false, 00:20:48.330 "get_zone_info": false, 00:20:48.330 "zone_management": false, 00:20:48.330 "zone_append": false, 00:20:48.330 "compare": false, 00:20:48.330 "compare_and_write": false, 00:20:48.330 "abort": false, 00:20:48.330 "seek_hole": true, 00:20:48.330 "seek_data": true, 00:20:48.330 "copy": false, 00:20:48.330 "nvme_iov_md": false 00:20:48.330 }, 00:20:48.330 "driver_specific": { 00:20:48.330 "lvol": { 00:20:48.330 "lvol_store_uuid": "878ff754-395e-4762-b954-26761ad78b72", 00:20:48.330 "base_bdev": "nvme0n1", 00:20:48.330 "thin_provision": true, 00:20:48.330 "num_allocated_clusters": 0, 00:20:48.330 "snapshot": false, 00:20:48.330 "clone": false, 00:20:48.330 "esnap_clone": false 00:20:48.330 } 00:20:48.330 } 00:20:48.330 } 00:20:48.330 ]' 00:20:48.330 08:39:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:48.588 08:39:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:48.588 08:39:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:48.588 08:39:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:48.588 08:39:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:48.588 08:39:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:48.588 08:39:27 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:20:48.588 08:39:27 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:20:48.588 08:39:27 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:48.846 08:39:28 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:48.846 08:39:28 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:48.846 08:39:28 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 46d1117d-0ee5-47d9-be22-46a1d44e40bb 00:20:48.846 08:39:28 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=46d1117d-0ee5-47d9-be22-46a1d44e40bb 00:20:48.846 08:39:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:48.846 08:39:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:20:48.846 08:39:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:48.847 08:39:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 46d1117d-0ee5-47d9-be22-46a1d44e40bb 00:20:49.105 08:39:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:49.105 { 00:20:49.105 "name": "46d1117d-0ee5-47d9-be22-46a1d44e40bb", 00:20:49.105 "aliases": [ 00:20:49.105 "lvs/nvme0n1p0" 00:20:49.105 ], 00:20:49.105 "product_name": "Logical Volume", 00:20:49.105 "block_size": 4096, 00:20:49.105 "num_blocks": 26476544, 00:20:49.105 "uuid": "46d1117d-0ee5-47d9-be22-46a1d44e40bb", 00:20:49.105 "assigned_rate_limits": { 00:20:49.105 "rw_ios_per_sec": 0, 00:20:49.105 "rw_mbytes_per_sec": 0, 00:20:49.105 "r_mbytes_per_sec": 0, 00:20:49.105 "w_mbytes_per_sec": 0 00:20:49.105 }, 00:20:49.105 "claimed": false, 00:20:49.105 "zoned": false, 00:20:49.105 "supported_io_types": { 00:20:49.105 "read": true, 00:20:49.105 "write": true, 00:20:49.105 "unmap": true, 00:20:49.105 "flush": false, 00:20:49.105 "reset": true, 00:20:49.105 "nvme_admin": false, 00:20:49.105 "nvme_io": false, 00:20:49.105 "nvme_io_md": false, 00:20:49.105 "write_zeroes": true, 00:20:49.105 "zcopy": false, 00:20:49.105 "get_zone_info": false, 00:20:49.105 "zone_management": false, 00:20:49.105 "zone_append": false, 00:20:49.105 "compare": false, 00:20:49.105 "compare_and_write": false, 00:20:49.105 "abort": false, 00:20:49.105 "seek_hole": true, 00:20:49.105 "seek_data": true, 00:20:49.105 "copy": false, 00:20:49.105 "nvme_iov_md": false 00:20:49.105 }, 00:20:49.105 "driver_specific": { 00:20:49.105 "lvol": { 00:20:49.105 "lvol_store_uuid": "878ff754-395e-4762-b954-26761ad78b72", 00:20:49.105 "base_bdev": "nvme0n1", 00:20:49.105 "thin_provision": true, 00:20:49.105 "num_allocated_clusters": 0, 00:20:49.105 "snapshot": false, 00:20:49.105 "clone": false, 00:20:49.105 "esnap_clone": false 00:20:49.105 } 00:20:49.105 } 00:20:49.105 } 00:20:49.105 ]' 00:20:49.105 08:39:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:49.105 08:39:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:49.105 08:39:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:49.363 08:39:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:49.363 08:39:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:49.363 08:39:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:49.363 08:39:28 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:20:49.363 08:39:28 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:49.621 08:39:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:20:49.621 08:39:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 46d1117d-0ee5-47d9-be22-46a1d44e40bb 00:20:49.621 08:39:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=46d1117d-0ee5-47d9-be22-46a1d44e40bb 00:20:49.621 08:39:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:49.621 08:39:28 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:20:49.621 08:39:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:20:49.621 08:39:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 46d1117d-0ee5-47d9-be22-46a1d44e40bb 00:20:49.879 08:39:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:49.879 { 00:20:49.879 "name": "46d1117d-0ee5-47d9-be22-46a1d44e40bb", 00:20:49.879 "aliases": [ 00:20:49.879 "lvs/nvme0n1p0" 00:20:49.879 ], 00:20:49.879 "product_name": "Logical Volume", 00:20:49.879 "block_size": 4096, 00:20:49.879 "num_blocks": 26476544, 00:20:49.879 "uuid": "46d1117d-0ee5-47d9-be22-46a1d44e40bb", 00:20:49.879 "assigned_rate_limits": { 00:20:49.879 "rw_ios_per_sec": 0, 00:20:49.879 "rw_mbytes_per_sec": 0, 00:20:49.879 "r_mbytes_per_sec": 0, 00:20:49.879 "w_mbytes_per_sec": 0 00:20:49.879 }, 00:20:49.879 "claimed": false, 00:20:49.879 "zoned": false, 00:20:49.879 "supported_io_types": { 00:20:49.879 "read": true, 00:20:49.879 "write": true, 00:20:49.879 "unmap": true, 00:20:49.879 "flush": false, 00:20:49.879 "reset": true, 00:20:49.879 "nvme_admin": false, 00:20:49.879 "nvme_io": false, 00:20:49.879 "nvme_io_md": false, 00:20:49.879 "write_zeroes": true, 00:20:49.879 "zcopy": false, 00:20:49.879 "get_zone_info": false, 00:20:49.879 "zone_management": false, 00:20:49.879 "zone_append": false, 00:20:49.879 "compare": false, 00:20:49.879 "compare_and_write": false, 00:20:49.879 "abort": false, 00:20:49.879 "seek_hole": true, 00:20:49.879 "seek_data": true, 00:20:49.879 "copy": false, 00:20:49.879 "nvme_iov_md": false 00:20:49.879 }, 00:20:49.879 "driver_specific": { 00:20:49.879 "lvol": { 00:20:49.879 "lvol_store_uuid": "878ff754-395e-4762-b954-26761ad78b72", 00:20:49.879 "base_bdev": "nvme0n1", 00:20:49.879 "thin_provision": true, 00:20:49.879 "num_allocated_clusters": 0, 00:20:49.879 "snapshot": false, 00:20:49.879 "clone": false, 00:20:49.879 "esnap_clone": false 00:20:49.879 } 00:20:49.879 } 00:20:49.879 } 00:20:49.879 ]' 00:20:49.879 08:39:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:49.879 08:39:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:20:49.879 08:39:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:49.879 08:39:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:49.879 08:39:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:49.879 08:39:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:20:49.879 08:39:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:20:49.879 08:39:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 46d1117d-0ee5-47d9-be22-46a1d44e40bb -c nvc0n1p0 --l2p_dram_limit 20 00:20:50.137 [2024-11-19 08:39:29.313364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.137 [2024-11-19 08:39:29.313422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:50.137 [2024-11-19 08:39:29.313459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:50.137 [2024-11-19 08:39:29.313472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.137 [2024-11-19 08:39:29.313545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.137 [2024-11-19 08:39:29.313569] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:50.137 [2024-11-19 08:39:29.313582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:50.137 [2024-11-19 08:39:29.313594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.137 [2024-11-19 08:39:29.313682] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:50.137 [2024-11-19 08:39:29.314750] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:50.137 [2024-11-19 08:39:29.314785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.137 [2024-11-19 08:39:29.314803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:50.137 [2024-11-19 08:39:29.314817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.172 ms 00:20:50.137 [2024-11-19 08:39:29.314830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.137 [2024-11-19 08:39:29.314961] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 44c33120-2d75-4d82-af61-9f3dc70cbed2 00:20:50.137 [2024-11-19 08:39:29.316035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.137 [2024-11-19 08:39:29.316277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:50.137 [2024-11-19 08:39:29.316310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:50.138 [2024-11-19 08:39:29.316326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.138 [2024-11-19 08:39:29.320954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.138 [2024-11-19 08:39:29.321011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:50.138 [2024-11-19 08:39:29.321047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.573 ms 00:20:50.138 [2024-11-19 08:39:29.321057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.138 [2024-11-19 08:39:29.321171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.138 [2024-11-19 08:39:29.321190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:50.138 [2024-11-19 08:39:29.321208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:50.138 [2024-11-19 08:39:29.321220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.138 [2024-11-19 08:39:29.321297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.138 [2024-11-19 08:39:29.321314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:50.138 [2024-11-19 08:39:29.321328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:50.138 [2024-11-19 08:39:29.321339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.138 [2024-11-19 08:39:29.321369] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:50.138 [2024-11-19 08:39:29.325832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.138 [2024-11-19 08:39:29.326072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:50.138 [2024-11-19 08:39:29.326102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.474 ms 00:20:50.138 [2024-11-19 08:39:29.326119] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.138 [2024-11-19 08:39:29.326178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.138 [2024-11-19 08:39:29.326199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:50.138 [2024-11-19 08:39:29.326212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:50.138 [2024-11-19 08:39:29.326225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.138 [2024-11-19 08:39:29.326267] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:50.138 [2024-11-19 08:39:29.326456] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:50.138 [2024-11-19 08:39:29.326474] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:50.138 [2024-11-19 08:39:29.326492] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:50.138 [2024-11-19 08:39:29.326507] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:50.138 [2024-11-19 08:39:29.326522] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:50.138 [2024-11-19 08:39:29.326549] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:50.138 [2024-11-19 08:39:29.326561] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:50.138 [2024-11-19 08:39:29.326572] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:50.138 [2024-11-19 08:39:29.326584] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:50.138 [2024-11-19 08:39:29.326596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.138 [2024-11-19 08:39:29.326612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:50.138 [2024-11-19 08:39:29.326624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:20:50.138 [2024-11-19 08:39:29.326636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.138 [2024-11-19 08:39:29.326761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.138 [2024-11-19 08:39:29.326802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:50.138 [2024-11-19 08:39:29.326830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:50.138 [2024-11-19 08:39:29.326846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.138 [2024-11-19 08:39:29.326945] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:50.138 [2024-11-19 08:39:29.326981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:50.138 [2024-11-19 08:39:29.326996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.138 [2024-11-19 08:39:29.327010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.138 [2024-11-19 08:39:29.327022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:50.138 [2024-11-19 08:39:29.327034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:50.138 [2024-11-19 08:39:29.327045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:50.138 
[2024-11-19 08:39:29.327058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:50.138 [2024-11-19 08:39:29.327069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:50.138 [2024-11-19 08:39:29.327081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.138 [2024-11-19 08:39:29.327107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:50.138 [2024-11-19 08:39:29.327130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:50.138 [2024-11-19 08:39:29.327140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.138 [2024-11-19 08:39:29.327165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:50.138 [2024-11-19 08:39:29.327182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:50.138 [2024-11-19 08:39:29.327197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.138 [2024-11-19 08:39:29.327208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:50.138 [2024-11-19 08:39:29.327220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:50.138 [2024-11-19 08:39:29.327230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.138 [2024-11-19 08:39:29.327245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:50.138 [2024-11-19 08:39:29.327256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:50.138 [2024-11-19 08:39:29.327267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.138 [2024-11-19 08:39:29.327278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:50.138 [2024-11-19 08:39:29.327290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:50.138 [2024-11-19 08:39:29.327300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.138 [2024-11-19 08:39:29.327312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:50.138 [2024-11-19 08:39:29.327322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:50.138 [2024-11-19 08:39:29.327334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.138 [2024-11-19 08:39:29.327345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:50.138 [2024-11-19 08:39:29.327357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:50.138 [2024-11-19 08:39:29.327367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.138 [2024-11-19 08:39:29.327381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:50.138 [2024-11-19 08:39:29.327392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:50.138 [2024-11-19 08:39:29.327404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.138 [2024-11-19 08:39:29.327414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:50.138 [2024-11-19 08:39:29.327426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:50.138 [2024-11-19 08:39:29.327436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.138 [2024-11-19 08:39:29.327448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:50.138 [2024-11-19 08:39:29.327459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:20:50.138 [2024-11-19 08:39:29.327501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.138 [2024-11-19 08:39:29.327513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:50.138 [2024-11-19 08:39:29.327526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:50.138 [2024-11-19 08:39:29.327536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.138 [2024-11-19 08:39:29.327548] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:50.138 [2024-11-19 08:39:29.327560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:50.138 [2024-11-19 08:39:29.327573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.138 [2024-11-19 08:39:29.327587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.138 [2024-11-19 08:39:29.327605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:50.138 [2024-11-19 08:39:29.327617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:50.138 [2024-11-19 08:39:29.327645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:50.138 [2024-11-19 08:39:29.327659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:50.138 [2024-11-19 08:39:29.327672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:50.138 [2024-11-19 08:39:29.327683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:50.138 [2024-11-19 08:39:29.327700] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:50.138 [2024-11-19 08:39:29.327714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.138 [2024-11-19 08:39:29.327729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:50.138 [2024-11-19 08:39:29.327740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:50.138 [2024-11-19 08:39:29.327753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:50.138 [2024-11-19 08:39:29.327765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:50.138 [2024-11-19 08:39:29.327778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:50.138 [2024-11-19 08:39:29.327789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:50.138 [2024-11-19 08:39:29.327802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:50.138 [2024-11-19 08:39:29.327813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:50.138 [2024-11-19 08:39:29.327828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:50.138 [2024-11-19 08:39:29.327841] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:50.138 [2024-11-19 08:39:29.327854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:50.138 [2024-11-19 08:39:29.327865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:50.138 [2024-11-19 08:39:29.327878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:50.138 [2024-11-19 08:39:29.327890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:50.138 [2024-11-19 08:39:29.327904] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:50.138 [2024-11-19 08:39:29.327916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.138 [2024-11-19 08:39:29.327932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:50.138 [2024-11-19 08:39:29.327944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:50.138 [2024-11-19 08:39:29.327958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:50.138 [2024-11-19 08:39:29.327984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:50.138 [2024-11-19 08:39:29.327999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.138 [2024-11-19 08:39:29.328012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:50.138 [2024-11-19 08:39:29.328026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.117 ms 00:20:50.138 [2024-11-19 08:39:29.328039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.138 [2024-11-19 08:39:29.328090] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
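Scrubbing happens here because this startup created a brand-new FTL instance (see the "Create new FTL, UUID ..." step earlier in the trace): the NV cache contents are untrusted and must be wiped chunk by chunk before use. The roughly 2.1 s scrub of 5 chunks that follows dominates the 'FTL startup' total of ~2554 ms reported at the end of this sequence. For context, an FTL bdev such as ftl0 is assembled from a base bdev and an NV-cache bdev over RPC; a hedged sketch only (the actual creation happened before this excerpt, and the base bdev name below is a placeholder):

    # Sketch only: create an FTL bdev over a base device and a write-buffer cache.
    # The trace above shows nvc0n1p0 in use as the write buffer cache.
    scripts/rpc.py bdev_ftl_create -b ftl0 -d <base_bdev> -c nvc0n1p0
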
00:20:50.139 [2024-11-19 08:39:29.328112] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:52.671 [2024-11-19 08:39:31.457123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.457449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:52.671 [2024-11-19 08:39:31.457589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2129.040 ms 00:20:52.671 [2024-11-19 08:39:31.457687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.489034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.489339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:52.671 [2024-11-19 08:39:31.489472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.957 ms 00:20:52.671 [2024-11-19 08:39:31.489641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.489954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.490080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:52.671 [2024-11-19 08:39:31.490203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:52.671 [2024-11-19 08:39:31.490340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.536629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.536881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:52.671 [2024-11-19 08:39:31.536926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.183 ms 00:20:52.671 [2024-11-19 08:39:31.536942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.537005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.537027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:52.671 [2024-11-19 08:39:31.537042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:52.671 [2024-11-19 08:39:31.537054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.537480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.537499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:52.671 [2024-11-19 08:39:31.537514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:20:52.671 [2024-11-19 08:39:31.537525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.537701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.537720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:52.671 [2024-11-19 08:39:31.537737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:20:52.671 [2024-11-19 08:39:31.537748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.554274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.554330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:52.671 [2024-11-19 
08:39:31.554367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.485 ms 00:20:52.671 [2024-11-19 08:39:31.554378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.567094] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:20:52.671 [2024-11-19 08:39:31.572080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.572289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:52.671 [2024-11-19 08:39:31.572318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.588 ms 00:20:52.671 [2024-11-19 08:39:31.572334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.635053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.635126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:52.671 [2024-11-19 08:39:31.635164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.678 ms 00:20:52.671 [2024-11-19 08:39:31.635177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.635378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.635402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:52.671 [2024-11-19 08:39:31.635415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:20:52.671 [2024-11-19 08:39:31.635427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.665177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.665223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:52.671 [2024-11-19 08:39:31.665259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.640 ms 00:20:52.671 [2024-11-19 08:39:31.665272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.694425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.694631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:52.671 [2024-11-19 08:39:31.694661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.110 ms 00:20:52.671 [2024-11-19 08:39:31.694677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.695519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.695548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:52.671 [2024-11-19 08:39:31.695563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:20:52.671 [2024-11-19 08:39:31.695576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.775313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.775396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:52.671 [2024-11-19 08:39:31.775417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.659 ms 00:20:52.671 [2024-11-19 08:39:31.775431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 
08:39:31.806067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.806131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:52.671 [2024-11-19 08:39:31.806150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.522 ms 00:20:52.671 [2024-11-19 08:39:31.806167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.835932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.836133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:52.671 [2024-11-19 08:39:31.836160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.722 ms 00:20:52.671 [2024-11-19 08:39:31.836175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.866306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.866523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:52.671 [2024-11-19 08:39:31.866552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.073 ms 00:20:52.671 [2024-11-19 08:39:31.866567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.866659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.866686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:52.671 [2024-11-19 08:39:31.866700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:52.671 [2024-11-19 08:39:31.866714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.866836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.671 [2024-11-19 08:39:31.866874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:52.671 [2024-11-19 08:39:31.866887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:52.671 [2024-11-19 08:39:31.866899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.671 [2024-11-19 08:39:31.868061] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2554.151 ms, result 0 00:20:52.671 { 00:20:52.671 "name": "ftl0", 00:20:52.671 "uuid": "44c33120-2d75-4d82-af61-9f3dc70cbed2" 00:20:52.671 } 00:20:52.671 08:39:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:20:52.671 08:39:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:20:52.671 08:39:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:20:52.930 08:39:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:20:53.190 [2024-11-19 08:39:32.336564] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:53.190 I/O size of 69632 is greater than zero copy threshold (65536). 00:20:53.190 Zero copy mechanism will not be used. 00:20:53.190 Running I/O for 4 seconds... 
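The perform_tests flags map one-to-one onto the job parameters echoed in the results JSON below: -q is the queue depth, -w the workload, -t the run time in seconds, and -o the I/O size in bytes. 69632 bytes is 17 * 4096 (68 KiB), which is why bdevperf reports that it exceeds the 65536-byte zero-copy threshold and that zero copy will not be used. The invocation as run by the test:

    # qd=1, random writes, 4 s, 68 KiB I/Os (69632 = 17 * 4096)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
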
00:20:55.059 1763.00 IOPS, 117.07 MiB/s [2024-11-19T08:39:35.731Z] 1835.50 IOPS, 121.89 MiB/s [2024-11-19T08:39:36.666Z] 1838.67 IOPS, 122.10 MiB/s [2024-11-19T08:39:36.666Z] 1837.25 IOPS, 122.00 MiB/s 00:20:57.370 Latency(us) 00:20:57.370 [2024-11-19T08:39:36.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.370 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:20:57.370 ftl0 : 4.00 1836.49 121.95 0.00 0.00 569.16 231.80 2546.97 00:20:57.370 [2024-11-19T08:39:36.666Z] =================================================================================================================== 00:20:57.370 [2024-11-19T08:39:36.666Z] Total : 1836.49 121.95 0.00 0.00 569.16 231.80 2546.97 00:20:57.370 [2024-11-19 08:39:36.349201] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:20:57.370 { 00:20:57.370 "results": [ 00:20:57.370 { 00:20:57.370 "job": "ftl0", 00:20:57.370 "core_mask": "0x1", 00:20:57.370 "workload": "randwrite", 00:20:57.370 "status": "finished", 00:20:57.370 "queue_depth": 1, 00:20:57.370 "io_size": 69632, 00:20:57.370 "runtime": 4.002208, 00:20:57.370 "iops": 1836.4862595847092, 00:20:57.370 "mibps": 121.95416567554709, 00:20:57.370 "io_failed": 0, 00:20:57.370 "io_timeout": 0, 00:20:57.370 "avg_latency_us": 569.1554750773037, 00:20:57.370 "min_latency_us": 231.79636363636362, 00:20:57.370 "max_latency_us": 2546.9672727272728 00:20:57.370 } 00:20:57.370 ], 00:20:57.370 "core_count": 1 00:20:57.370 } 00:20:57.370 08:39:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:20:57.370 [2024-11-19 08:39:36.496220] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:20:57.370 Running I/O for 4 seconds... 
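The human-readable table and the JSON block for the qd=1 run above agree once units are converted: MiB/s = IOPS * io_size / 2^20, i.e. 1836.4863 * 69632 / 1048576 ≈ 121.95, matching both the table row and "mibps" in the JSON. A one-line shell check of the conversion:

    # MiB/s = IOPS * io_size_bytes / 2^20 (values taken from the JSON above)
    python3 -c 'print(1836.4862595847092 * 69632 / 1048576)'   # ~121.954
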
00:20:59.239 7665.00 IOPS, 29.94 MiB/s [2024-11-19T08:39:39.908Z] 7426.50 IOPS, 29.01 MiB/s [2024-11-19T08:39:40.841Z] 7386.00 IOPS, 28.85 MiB/s [2024-11-19T08:39:40.841Z] 7332.25 IOPS, 28.64 MiB/s 00:21:01.545 Latency(us) 00:21:01.545 [2024-11-19T08:39:40.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.545 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:21:01.545 ftl0 : 4.02 7321.16 28.60 0.00 0.00 17432.23 353.75 33840.41 00:21:01.545 [2024-11-19T08:39:40.841Z] =================================================================================================================== 00:21:01.545 [2024-11-19T08:39:40.841Z] Total : 7321.16 28.60 0.00 0.00 17432.23 0.00 33840.41 00:21:01.545 { 00:21:01.545 "results": [ 00:21:01.545 { 00:21:01.545 "job": "ftl0", 00:21:01.545 "core_mask": "0x1", 00:21:01.545 "workload": "randwrite", 00:21:01.545 "status": "finished", 00:21:01.545 "queue_depth": 128, 00:21:01.545 "io_size": 4096, 00:21:01.545 "runtime": 4.023272, 00:21:01.545 "iops": 7321.155517200925, 00:21:01.545 "mibps": 28.598263739066113, 00:21:01.545 "io_failed": 0, 00:21:01.545 "io_timeout": 0, 00:21:01.545 "avg_latency_us": 17432.230026079844, 00:21:01.545 "min_latency_us": 353.74545454545455, 00:21:01.545 "max_latency_us": 33840.40727272727 00:21:01.545 } 00:21:01.546 ], 00:21:01.546 "core_count": 1 00:21:01.546 } 00:21:01.546 [2024-11-19 08:39:40.531098] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:01.546 08:39:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:21:01.546 [2024-11-19 08:39:40.677165] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:01.546 Running I/O for 4 seconds... 
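The same conversion holds for the qd=128 4 KiB run above: 7321.1555 * 4096 / 1048576 ≈ 28.60 MiB/s. The verify workload starting here additionally reads back what it wrote and checks the data; the range printed with its results below, start 0x0 length 0x1400000, is 20971520 blocks, which at 4096 bytes per block works out to 80 GiB and lines up with the 20971520 L2P entries reported during startup (one entry per 4 KiB block). Decoding the hex in shell:

    # 0x1400000 blocks * 4096 B/block = 80 GiB of logical space
    python3 -c 'print(0x1400000, 0x1400000 * 4096 // 2**30, "GiB")'
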
00:21:03.411 5843.00 IOPS, 22.82 MiB/s [2024-11-19T08:39:44.136Z] 6048.00 IOPS, 23.62 MiB/s [2024-11-19T08:39:44.702Z] 5977.33 IOPS, 23.35 MiB/s [2024-11-19T08:39:44.960Z] 6007.75 IOPS, 23.47 MiB/s 00:21:05.664 Latency(us) 00:21:05.664 [2024-11-19T08:39:44.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.664 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:05.664 Verification LBA range: start 0x0 length 0x1400000 00:21:05.664 ftl0 : 4.01 6018.85 23.51 0.00 0.00 21190.94 363.05 29908.25 00:21:05.664 [2024-11-19T08:39:44.960Z] =================================================================================================================== 00:21:05.664 [2024-11-19T08:39:44.960Z] Total : 6018.85 23.51 0.00 0.00 21190.94 0.00 29908.25 00:21:05.664 { 00:21:05.664 "results": [ 00:21:05.664 { 00:21:05.664 "job": "ftl0", 00:21:05.664 "core_mask": "0x1", 00:21:05.664 "workload": "verify", 00:21:05.664 "status": "finished", 00:21:05.664 "verify_range": { 00:21:05.664 "start": 0, 00:21:05.664 "length": 20971520 00:21:05.664 }, 00:21:05.664 "queue_depth": 128, 00:21:05.664 "io_size": 4096, 00:21:05.664 "runtime": 4.013726, 00:21:05.664 "iops": 6018.846328822645, 00:21:05.664 "mibps": 23.511118471963456, 00:21:05.664 "io_failed": 0, 00:21:05.664 "io_timeout": 0, 00:21:05.664 "avg_latency_us": 21190.941530680597, 00:21:05.664 "min_latency_us": 363.05454545454546, 00:21:05.664 "max_latency_us": 29908.247272727273 00:21:05.664 } 00:21:05.664 ], 00:21:05.664 "core_count": 1 00:21:05.664 } 00:21:05.664 [2024-11-19 08:39:44.710117] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:05.664 08:39:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:21:05.923 [2024-11-19 08:39:45.011263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.923 [2024-11-19 08:39:45.011341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:05.923 [2024-11-19 08:39:45.011381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:05.923 [2024-11-19 08:39:45.011395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.923 [2024-11-19 08:39:45.011428] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:05.923 [2024-11-19 08:39:45.014728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.923 [2024-11-19 08:39:45.014762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:05.923 [2024-11-19 08:39:45.014797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.272 ms 00:21:05.923 [2024-11-19 08:39:45.014808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.923 [2024-11-19 08:39:45.016469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.923 [2024-11-19 08:39:45.016515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:05.923 [2024-11-19 08:39:45.016557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.610 ms 00:21:05.923 [2024-11-19 08:39:45.016570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.923 [2024-11-19 08:39:45.196205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.923 [2024-11-19 08:39:45.196495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:21:05.923 [2024-11-19 08:39:45.196539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 179.592 ms 00:21:05.923 [2024-11-19 08:39:45.196555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.923 [2024-11-19 08:39:45.203407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.923 [2024-11-19 08:39:45.203443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:05.923 [2024-11-19 08:39:45.203463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.774 ms 00:21:05.923 [2024-11-19 08:39:45.203483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.182 [2024-11-19 08:39:45.234916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.182 [2024-11-19 08:39:45.234979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:06.182 [2024-11-19 08:39:45.235002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.355 ms 00:21:06.182 [2024-11-19 08:39:45.235015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.182 [2024-11-19 08:39:45.253743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.182 [2024-11-19 08:39:45.253787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:06.182 [2024-11-19 08:39:45.253828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.676 ms 00:21:06.182 [2024-11-19 08:39:45.253841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.182 [2024-11-19 08:39:45.254020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.182 [2024-11-19 08:39:45.254043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:06.182 [2024-11-19 08:39:45.254062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:21:06.182 [2024-11-19 08:39:45.254074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.182 [2024-11-19 08:39:45.285506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.182 [2024-11-19 08:39:45.285548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:06.182 [2024-11-19 08:39:45.285585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.407 ms 00:21:06.182 [2024-11-19 08:39:45.285597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.182 [2024-11-19 08:39:45.316768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.182 [2024-11-19 08:39:45.316816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:06.182 [2024-11-19 08:39:45.316837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.085 ms 00:21:06.182 [2024-11-19 08:39:45.316850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.182 [2024-11-19 08:39:45.348052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.182 [2024-11-19 08:39:45.348232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:06.182 [2024-11-19 08:39:45.348269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.152 ms 00:21:06.182 [2024-11-19 08:39:45.348282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.182 [2024-11-19 08:39:45.379640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.182 [2024-11-19 
08:39:45.379684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:06.182 [2024-11-19 08:39:45.379708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.217 ms 00:21:06.182 [2024-11-19 08:39:45.379721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.182 [2024-11-19 08:39:45.379772] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:06.182 [2024-11-19 08:39:45.379796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:06.182 [2024-11-19 08:39:45.379813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:06.182 [2024-11-19 08:39:45.379826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:06.182 [2024-11-19 08:39:45.379840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:06.182 [2024-11-19 08:39:45.379853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:06.182 [2024-11-19 08:39:45.379867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:06.182 [2024-11-19 08:39:45.379879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:06.182 [2024-11-19 08:39:45.379893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:06.182 [2024-11-19 08:39:45.379905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:06.182 [2024-11-19 08:39:45.379919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:06.182 [2024-11-19 08:39:45.379932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:06.182 [2024-11-19 08:39:45.379946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:06.182 [2024-11-19 08:39:45.379958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:06.182 [2024-11-19 08:39:45.379974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:06.182 [2024-11-19 08:39:45.379986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380792] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.380986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:06.183 [2024-11-19 08:39:45.381000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:06.184 [2024-11-19 08:39:45.381012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:06.184 [2024-11-19 08:39:45.381026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:06.184 [2024-11-19 08:39:45.381038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:06.184 [2024-11-19 08:39:45.381051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:06.184 [2024-11-19 08:39:45.381063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:06.184 [2024-11-19 08:39:45.381084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:06.184 [2024-11-19 08:39:45.381096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:06.184 [2024-11-19 08:39:45.381110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:06.184 [2024-11-19 08:39:45.381122] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:06.184 [2024-11-19 08:39:45.381137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:06.184 [2024-11-19 08:39:45.381150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:06.184 [2024-11-19 08:39:45.381164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:06.184 [2024-11-19 08:39:45.381185] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:06.184 [2024-11-19 08:39:45.381199] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 44c33120-2d75-4d82-af61-9f3dc70cbed2 00:21:06.184 [2024-11-19 08:39:45.381211] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:06.184 [2024-11-19 08:39:45.381224] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:06.184 [2024-11-19 08:39:45.381238] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:06.184 [2024-11-19 08:39:45.381252] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:06.184 [2024-11-19 08:39:45.381263] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:06.184 [2024-11-19 08:39:45.381276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:06.184 [2024-11-19 08:39:45.381287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:06.184 [2024-11-19 08:39:45.381301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:06.184 [2024-11-19 08:39:45.381311] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:06.184 [2024-11-19 08:39:45.381325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.184 [2024-11-19 08:39:45.381336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:06.184 [2024-11-19 08:39:45.381352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.556 ms 00:21:06.184 [2024-11-19 08:39:45.381364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.184 [2024-11-19 08:39:45.397997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.184 [2024-11-19 08:39:45.398169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:06.184 [2024-11-19 08:39:45.398205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.569 ms 00:21:06.184 [2024-11-19 08:39:45.398219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.184 [2024-11-19 08:39:45.398689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.184 [2024-11-19 08:39:45.398716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:06.184 [2024-11-19 08:39:45.398734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:21:06.184 [2024-11-19 08:39:45.398746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.184 [2024-11-19 08:39:45.445507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.184 [2024-11-19 08:39:45.445751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:06.184 [2024-11-19 08:39:45.445792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.184 [2024-11-19 08:39:45.445807] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:06.184 [2024-11-19 08:39:45.445894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.184 [2024-11-19 08:39:45.445911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:06.184 [2024-11-19 08:39:45.445926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.184 [2024-11-19 08:39:45.445937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.184 [2024-11-19 08:39:45.446072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.184 [2024-11-19 08:39:45.446096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:06.184 [2024-11-19 08:39:45.446112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.184 [2024-11-19 08:39:45.446124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.184 [2024-11-19 08:39:45.446151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.184 [2024-11-19 08:39:45.446166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:06.184 [2024-11-19 08:39:45.446180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.184 [2024-11-19 08:39:45.446191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.442 [2024-11-19 08:39:45.550408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.442 [2024-11-19 08:39:45.550475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:06.442 [2024-11-19 08:39:45.550500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.442 [2024-11-19 08:39:45.550513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.442 [2024-11-19 08:39:45.635263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.442 [2024-11-19 08:39:45.635350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:06.443 [2024-11-19 08:39:45.635390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.443 [2024-11-19 08:39:45.635403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.443 [2024-11-19 08:39:45.635554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.443 [2024-11-19 08:39:45.635575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:06.443 [2024-11-19 08:39:45.635595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.443 [2024-11-19 08:39:45.635631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.443 [2024-11-19 08:39:45.635726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.443 [2024-11-19 08:39:45.635746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:06.443 [2024-11-19 08:39:45.635761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.443 [2024-11-19 08:39:45.635773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.443 [2024-11-19 08:39:45.635905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.443 [2024-11-19 08:39:45.635925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:06.443 [2024-11-19 08:39:45.635947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:21:06.443 [2024-11-19 08:39:45.635959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.443 [2024-11-19 08:39:45.636016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.443 [2024-11-19 08:39:45.636035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:06.443 [2024-11-19 08:39:45.636050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.443 [2024-11-19 08:39:45.636061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.443 [2024-11-19 08:39:45.636110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.443 [2024-11-19 08:39:45.636132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:06.443 [2024-11-19 08:39:45.636148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.443 [2024-11-19 08:39:45.636163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.443 [2024-11-19 08:39:45.636223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:06.443 [2024-11-19 08:39:45.636252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:06.443 [2024-11-19 08:39:45.636291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:06.443 [2024-11-19 08:39:45.636303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.443 [2024-11-19 08:39:45.636463] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 625.153 ms, result 0 00:21:06.443 true 00:21:06.443 08:39:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75403 00:21:06.443 08:39:45 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75403 ']' 00:21:06.443 08:39:45 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75403 00:21:06.443 08:39:45 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:21:06.443 08:39:45 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.443 08:39:45 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75403 00:21:06.443 killing process with pid 75403 00:21:06.443 Received shutdown signal, test time was about 4.000000 seconds 00:21:06.443 00:21:06.443 Latency(us) 00:21:06.443 [2024-11-19T08:39:45.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.443 [2024-11-19T08:39:45.739Z] =================================================================================================================== 00:21:06.443 [2024-11-19T08:39:45.739Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:06.443 08:39:45 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:06.443 08:39:45 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:06.443 08:39:45 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75403' 00:21:06.443 08:39:45 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75403 00:21:06.443 08:39:45 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75403 00:21:09.726 Remove shared memory files 00:21:09.726 08:39:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:09.726 08:39:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:21:09.726 08:39:49 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:09.726 08:39:49 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:21:09.726 08:39:49 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:21:09.726 08:39:49 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:21:09.726 08:39:49 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:09.726 08:39:49 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:21:09.726 00:21:09.726 real 0m25.004s 00:21:09.726 user 0m29.085s 00:21:09.726 sys 0m1.140s 00:21:09.726 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:09.984 ************************************ 00:21:09.984 END TEST ftl_bdevperf 00:21:09.984 ************************************ 00:21:09.984 08:39:49 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:09.984 08:39:49 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:09.984 08:39:49 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:09.984 08:39:49 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:09.984 08:39:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:09.984 ************************************ 00:21:09.984 START TEST ftl_trim 00:21:09.984 ************************************ 00:21:09.984 08:39:49 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:09.984 * Looking for test storage... 00:21:09.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:09.984 08:39:49 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:09.984 08:39:49 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:21:09.984 08:39:49 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:09.984 08:39:49 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:09.984 08:39:49 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:21:09.984 08:39:49 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:09.984 08:39:49 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:09.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.984 --rc genhtml_branch_coverage=1 00:21:09.984 --rc genhtml_function_coverage=1 00:21:09.984 --rc genhtml_legend=1 00:21:09.984 --rc geninfo_all_blocks=1 00:21:09.984 --rc geninfo_unexecuted_blocks=1 00:21:09.984 00:21:09.984 ' 00:21:09.984 08:39:49 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:09.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.984 --rc genhtml_branch_coverage=1 00:21:09.984 --rc genhtml_function_coverage=1 00:21:09.984 --rc genhtml_legend=1 00:21:09.984 --rc geninfo_all_blocks=1 00:21:09.984 --rc geninfo_unexecuted_blocks=1 00:21:09.984 00:21:09.984 ' 00:21:09.984 08:39:49 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:09.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.984 --rc genhtml_branch_coverage=1 00:21:09.984 --rc genhtml_function_coverage=1 00:21:09.984 --rc genhtml_legend=1 00:21:09.984 --rc geninfo_all_blocks=1 00:21:09.984 --rc geninfo_unexecuted_blocks=1 00:21:09.984 00:21:09.984 ' 00:21:09.984 08:39:49 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:09.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.984 --rc genhtml_branch_coverage=1 00:21:09.984 --rc genhtml_function_coverage=1 00:21:09.984 --rc genhtml_legend=1 00:21:09.984 --rc geninfo_all_blocks=1 00:21:09.984 --rc geninfo_unexecuted_blocks=1 00:21:09.984 00:21:09.984 ' 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
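The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates 2.x: both version strings are split on IFS=.-: into arrays and compared field by field, and the resulting 1 < 2 selects the extra --rc branch/function coverage options exported right after it. A condensed, self-contained sketch of that comparison, reconstructed from the traced lines (the real helper is spread across lt/cmp_versions in scripts/common.sh; this collapses them into one function and assumes purely numeric fields):

    # lt A B -> exit 0 when version A sorts strictly before version B.
    # Mirrors the traced logic: split on '.', '-', ':' and compare
    # numeric fields left to right; missing fields count as 0.
    lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            ((a > b)) && return 1   # first differing field decides
            ((a < b)) && return 0
        done
        return 1                    # equal versions are not strictly less-than
    }

    lt 1.15 2 && echo 'lcov predates 2.x'   # the branch taken in this run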
00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:09.984 08:39:49 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:21:09.985 08:39:49 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:21:09.985 08:39:49 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:21:09.985 08:39:49 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:21:09.985 08:39:49 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:21:09.985 08:39:49 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:21:09.985 08:39:49 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:21:09.985 08:39:49 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:21:09.985 08:39:49 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:09.985 08:39:49 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:09.985 08:39:49 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:09.985 08:39:49 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=75756 00:21:09.985 08:39:49 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:21:09.985 08:39:49 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 75756 00:21:09.985 08:39:49 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 75756 ']' 00:21:09.985 08:39:49 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.985 08:39:49 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.985 08:39:49 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.985 08:39:49 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.985 08:39:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:10.242 [2024-11-19 08:39:49.372235] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:21:10.242 [2024-11-19 08:39:49.372548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75756 ] 00:21:10.500 [2024-11-19 08:39:49.550526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:10.501 [2024-11-19 08:39:49.680505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.501 [2024-11-19 08:39:49.680600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.501 [2024-11-19 08:39:49.680632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.437 08:39:50 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.437 08:39:50 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:11.437 08:39:50 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:11.437 08:39:50 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:21:11.437 08:39:50 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:11.437 08:39:50 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:21:11.437 08:39:50 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:21:11.437 08:39:50 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:11.695 08:39:50 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:11.695 08:39:50 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:21:11.695 08:39:50 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:11.695 08:39:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:11.695 08:39:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:11.695 08:39:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:11.695 08:39:50 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:11.695 08:39:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:11.954 08:39:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:11.954 { 00:21:11.954 "name": "nvme0n1", 00:21:11.954 "aliases": [ 
00:21:11.954 "fb3339ab-8f04-4dbe-ba0a-c9d5d7ad5429" 00:21:11.954 ], 00:21:11.954 "product_name": "NVMe disk", 00:21:11.954 "block_size": 4096, 00:21:11.954 "num_blocks": 1310720, 00:21:11.954 "uuid": "fb3339ab-8f04-4dbe-ba0a-c9d5d7ad5429", 00:21:11.954 "numa_id": -1, 00:21:11.954 "assigned_rate_limits": { 00:21:11.954 "rw_ios_per_sec": 0, 00:21:11.954 "rw_mbytes_per_sec": 0, 00:21:11.954 "r_mbytes_per_sec": 0, 00:21:11.954 "w_mbytes_per_sec": 0 00:21:11.954 }, 00:21:11.954 "claimed": true, 00:21:11.954 "claim_type": "read_many_write_one", 00:21:11.954 "zoned": false, 00:21:11.954 "supported_io_types": { 00:21:11.954 "read": true, 00:21:11.954 "write": true, 00:21:11.954 "unmap": true, 00:21:11.954 "flush": true, 00:21:11.954 "reset": true, 00:21:11.954 "nvme_admin": true, 00:21:11.954 "nvme_io": true, 00:21:11.954 "nvme_io_md": false, 00:21:11.954 "write_zeroes": true, 00:21:11.954 "zcopy": false, 00:21:11.954 "get_zone_info": false, 00:21:11.954 "zone_management": false, 00:21:11.954 "zone_append": false, 00:21:11.954 "compare": true, 00:21:11.954 "compare_and_write": false, 00:21:11.954 "abort": true, 00:21:11.954 "seek_hole": false, 00:21:11.954 "seek_data": false, 00:21:11.954 "copy": true, 00:21:11.954 "nvme_iov_md": false 00:21:11.954 }, 00:21:11.954 "driver_specific": { 00:21:11.954 "nvme": [ 00:21:11.954 { 00:21:11.954 "pci_address": "0000:00:11.0", 00:21:11.954 "trid": { 00:21:11.954 "trtype": "PCIe", 00:21:11.954 "traddr": "0000:00:11.0" 00:21:11.954 }, 00:21:11.954 "ctrlr_data": { 00:21:11.954 "cntlid": 0, 00:21:11.954 "vendor_id": "0x1b36", 00:21:11.954 "model_number": "QEMU NVMe Ctrl", 00:21:11.954 "serial_number": "12341", 00:21:11.954 "firmware_revision": "8.0.0", 00:21:11.954 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:11.954 "oacs": { 00:21:11.954 "security": 0, 00:21:11.954 "format": 1, 00:21:11.954 "firmware": 0, 00:21:11.954 "ns_manage": 1 00:21:11.954 }, 00:21:11.954 "multi_ctrlr": false, 00:21:11.954 "ana_reporting": false 00:21:11.954 }, 00:21:11.954 "vs": { 00:21:11.954 "nvme_version": "1.4" 00:21:11.954 }, 00:21:11.954 "ns_data": { 00:21:11.954 "id": 1, 00:21:11.954 "can_share": false 00:21:11.954 } 00:21:11.954 } 00:21:11.954 ], 00:21:11.954 "mp_policy": "active_passive" 00:21:11.954 } 00:21:11.954 } 00:21:11.954 ]' 00:21:11.954 08:39:51 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:11.954 08:39:51 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:11.954 08:39:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:11.954 08:39:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:11.954 08:39:51 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:11.954 08:39:51 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:21:11.954 08:39:51 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:21:11.954 08:39:51 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:11.954 08:39:51 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:21:11.954 08:39:51 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:11.954 08:39:51 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:12.212 08:39:51 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=878ff754-395e-4762-b954-26761ad78b72 00:21:12.212 08:39:51 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:21:12.212 08:39:51 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 878ff754-395e-4762-b954-26761ad78b72 00:21:12.470 08:39:51 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:13.036 08:39:52 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=a9e5c0c0-16ef-432f-be64-f8bd6a8fe532 00:21:13.036 08:39:52 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a9e5c0c0-16ef-432f-be64-f8bd6a8fe532 00:21:13.036 08:39:52 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=041dac0b-c405-4e40-8f17-ebe953fe98a6 00:21:13.036 08:39:52 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 041dac0b-c405-4e40-8f17-ebe953fe98a6 00:21:13.036 08:39:52 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:21:13.036 08:39:52 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:13.036 08:39:52 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=041dac0b-c405-4e40-8f17-ebe953fe98a6 00:21:13.036 08:39:52 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:21:13.036 08:39:52 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 041dac0b-c405-4e40-8f17-ebe953fe98a6 00:21:13.036 08:39:52 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=041dac0b-c405-4e40-8f17-ebe953fe98a6 00:21:13.036 08:39:52 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:13.036 08:39:52 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:13.036 08:39:52 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:13.036 08:39:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 041dac0b-c405-4e40-8f17-ebe953fe98a6 00:21:13.636 08:39:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:13.636 { 00:21:13.636 "name": "041dac0b-c405-4e40-8f17-ebe953fe98a6", 00:21:13.636 "aliases": [ 00:21:13.636 "lvs/nvme0n1p0" 00:21:13.636 ], 00:21:13.636 "product_name": "Logical Volume", 00:21:13.636 "block_size": 4096, 00:21:13.636 "num_blocks": 26476544, 00:21:13.636 "uuid": "041dac0b-c405-4e40-8f17-ebe953fe98a6", 00:21:13.636 "assigned_rate_limits": { 00:21:13.636 "rw_ios_per_sec": 0, 00:21:13.636 "rw_mbytes_per_sec": 0, 00:21:13.636 "r_mbytes_per_sec": 0, 00:21:13.636 "w_mbytes_per_sec": 0 00:21:13.636 }, 00:21:13.636 "claimed": false, 00:21:13.636 "zoned": false, 00:21:13.636 "supported_io_types": { 00:21:13.636 "read": true, 00:21:13.636 "write": true, 00:21:13.636 "unmap": true, 00:21:13.636 "flush": false, 00:21:13.636 "reset": true, 00:21:13.636 "nvme_admin": false, 00:21:13.636 "nvme_io": false, 00:21:13.636 "nvme_io_md": false, 00:21:13.636 "write_zeroes": true, 00:21:13.636 "zcopy": false, 00:21:13.636 "get_zone_info": false, 00:21:13.636 "zone_management": false, 00:21:13.636 "zone_append": false, 00:21:13.636 "compare": false, 00:21:13.636 "compare_and_write": false, 00:21:13.636 "abort": false, 00:21:13.636 "seek_hole": true, 00:21:13.636 "seek_data": true, 00:21:13.636 "copy": false, 00:21:13.636 "nvme_iov_md": false 00:21:13.636 }, 00:21:13.636 "driver_specific": { 00:21:13.636 "lvol": { 00:21:13.636 "lvol_store_uuid": "a9e5c0c0-16ef-432f-be64-f8bd6a8fe532", 00:21:13.636 "base_bdev": "nvme0n1", 00:21:13.636 "thin_provision": true, 00:21:13.636 "num_allocated_clusters": 0, 00:21:13.636 "snapshot": false, 00:21:13.636 "clone": false, 00:21:13.636 "esnap_clone": false 00:21:13.636 } 00:21:13.636 } 00:21:13.636 } 00:21:13.636 ]' 00:21:13.636 08:39:52 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:13.636 08:39:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:13.636 08:39:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:13.636 08:39:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:13.636 08:39:52 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:13.636 08:39:52 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:13.636 08:39:52 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:21:13.636 08:39:52 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:21:13.636 08:39:52 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:13.894 08:39:53 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:13.894 08:39:53 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:13.894 08:39:53 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 041dac0b-c405-4e40-8f17-ebe953fe98a6 00:21:13.894 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=041dac0b-c405-4e40-8f17-ebe953fe98a6 00:21:13.895 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:13.895 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:13.895 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:13.895 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 041dac0b-c405-4e40-8f17-ebe953fe98a6 00:21:14.153 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:14.153 { 00:21:14.153 "name": "041dac0b-c405-4e40-8f17-ebe953fe98a6", 00:21:14.153 "aliases": [ 00:21:14.153 "lvs/nvme0n1p0" 00:21:14.153 ], 00:21:14.153 "product_name": "Logical Volume", 00:21:14.153 "block_size": 4096, 00:21:14.153 "num_blocks": 26476544, 00:21:14.153 "uuid": "041dac0b-c405-4e40-8f17-ebe953fe98a6", 00:21:14.153 "assigned_rate_limits": { 00:21:14.153 "rw_ios_per_sec": 0, 00:21:14.153 "rw_mbytes_per_sec": 0, 00:21:14.153 "r_mbytes_per_sec": 0, 00:21:14.153 "w_mbytes_per_sec": 0 00:21:14.153 }, 00:21:14.153 "claimed": false, 00:21:14.153 "zoned": false, 00:21:14.153 "supported_io_types": { 00:21:14.153 "read": true, 00:21:14.153 "write": true, 00:21:14.153 "unmap": true, 00:21:14.153 "flush": false, 00:21:14.153 "reset": true, 00:21:14.153 "nvme_admin": false, 00:21:14.153 "nvme_io": false, 00:21:14.153 "nvme_io_md": false, 00:21:14.153 "write_zeroes": true, 00:21:14.153 "zcopy": false, 00:21:14.153 "get_zone_info": false, 00:21:14.153 "zone_management": false, 00:21:14.153 "zone_append": false, 00:21:14.153 "compare": false, 00:21:14.153 "compare_and_write": false, 00:21:14.153 "abort": false, 00:21:14.153 "seek_hole": true, 00:21:14.153 "seek_data": true, 00:21:14.153 "copy": false, 00:21:14.153 "nvme_iov_md": false 00:21:14.153 }, 00:21:14.153 "driver_specific": { 00:21:14.153 "lvol": { 00:21:14.153 "lvol_store_uuid": "a9e5c0c0-16ef-432f-be64-f8bd6a8fe532", 00:21:14.153 "base_bdev": "nvme0n1", 00:21:14.153 "thin_provision": true, 00:21:14.153 "num_allocated_clusters": 0, 00:21:14.153 "snapshot": false, 00:21:14.153 "clone": false, 00:21:14.153 "esnap_clone": false 00:21:14.153 } 00:21:14.154 } 00:21:14.154 } 00:21:14.154 ]' 00:21:14.154 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:14.154 08:39:53 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:21:14.154 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:14.154 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:14.154 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:14.154 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:14.154 08:39:53 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:21:14.154 08:39:53 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:14.412 08:39:53 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:21:14.412 08:39:53 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:21:14.412 08:39:53 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 041dac0b-c405-4e40-8f17-ebe953fe98a6 00:21:14.412 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=041dac0b-c405-4e40-8f17-ebe953fe98a6 00:21:14.412 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:14.412 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:14.412 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:14.412 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 041dac0b-c405-4e40-8f17-ebe953fe98a6 00:21:14.979 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:14.979 { 00:21:14.979 "name": "041dac0b-c405-4e40-8f17-ebe953fe98a6", 00:21:14.979 "aliases": [ 00:21:14.979 "lvs/nvme0n1p0" 00:21:14.979 ], 00:21:14.979 "product_name": "Logical Volume", 00:21:14.979 "block_size": 4096, 00:21:14.979 "num_blocks": 26476544, 00:21:14.979 "uuid": "041dac0b-c405-4e40-8f17-ebe953fe98a6", 00:21:14.979 "assigned_rate_limits": { 00:21:14.979 "rw_ios_per_sec": 0, 00:21:14.979 "rw_mbytes_per_sec": 0, 00:21:14.979 "r_mbytes_per_sec": 0, 00:21:14.979 "w_mbytes_per_sec": 0 00:21:14.979 }, 00:21:14.979 "claimed": false, 00:21:14.979 "zoned": false, 00:21:14.979 "supported_io_types": { 00:21:14.979 "read": true, 00:21:14.979 "write": true, 00:21:14.979 "unmap": true, 00:21:14.979 "flush": false, 00:21:14.979 "reset": true, 00:21:14.979 "nvme_admin": false, 00:21:14.979 "nvme_io": false, 00:21:14.979 "nvme_io_md": false, 00:21:14.979 "write_zeroes": true, 00:21:14.979 "zcopy": false, 00:21:14.979 "get_zone_info": false, 00:21:14.979 "zone_management": false, 00:21:14.979 "zone_append": false, 00:21:14.979 "compare": false, 00:21:14.979 "compare_and_write": false, 00:21:14.979 "abort": false, 00:21:14.979 "seek_hole": true, 00:21:14.979 "seek_data": true, 00:21:14.979 "copy": false, 00:21:14.979 "nvme_iov_md": false 00:21:14.979 }, 00:21:14.979 "driver_specific": { 00:21:14.979 "lvol": { 00:21:14.979 "lvol_store_uuid": "a9e5c0c0-16ef-432f-be64-f8bd6a8fe532", 00:21:14.979 "base_bdev": "nvme0n1", 00:21:14.979 "thin_provision": true, 00:21:14.979 "num_allocated_clusters": 0, 00:21:14.979 "snapshot": false, 00:21:14.979 "clone": false, 00:21:14.979 "esnap_clone": false 00:21:14.979 } 00:21:14.979 } 00:21:14.979 } 00:21:14.979 ]' 00:21:14.979 08:39:53 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:14.979 08:39:54 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:14.979 08:39:54 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:14.979 08:39:54 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:21:14.979 08:39:54 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:14.979 08:39:54 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:14.979 08:39:54 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:21:14.979 08:39:54 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 041dac0b-c405-4e40-8f17-ebe953fe98a6 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:21:15.239 [2024-11-19 08:39:54.297740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.239 [2024-11-19 08:39:54.297799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:15.239 [2024-11-19 08:39:54.297828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:15.239 [2024-11-19 08:39:54.297843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.239 [2024-11-19 08:39:54.301369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.239 [2024-11-19 08:39:54.301417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:15.239 [2024-11-19 08:39:54.301440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.486 ms 00:21:15.239 [2024-11-19 08:39:54.301454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.239 [2024-11-19 08:39:54.301602] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:15.239 [2024-11-19 08:39:54.302586] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:15.239 [2024-11-19 08:39:54.302649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.239 [2024-11-19 08:39:54.302668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:15.239 [2024-11-19 08:39:54.302684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.058 ms 00:21:15.239 [2024-11-19 08:39:54.302697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.239 [2024-11-19 08:39:54.302944] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e7b4f20c-cbe0-42b7-a2fd-ee64b862d20d 00:21:15.239 [2024-11-19 08:39:54.304002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.239 [2024-11-19 08:39:54.304047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:15.239 [2024-11-19 08:39:54.304082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:15.239 [2024-11-19 08:39:54.304098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.239 [2024-11-19 08:39:54.308699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.239 [2024-11-19 08:39:54.308753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:15.239 [2024-11-19 08:39:54.308776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.508 ms 00:21:15.239 [2024-11-19 08:39:54.308796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.239 [2024-11-19 08:39:54.308994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.239 [2024-11-19 08:39:54.309021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:15.239 [2024-11-19 08:39:54.309036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.104 ms 00:21:15.239 [2024-11-19 08:39:54.309057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.239 [2024-11-19 08:39:54.309108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.239 [2024-11-19 08:39:54.309127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:15.239 [2024-11-19 08:39:54.309141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:15.239 [2024-11-19 08:39:54.309157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.239 [2024-11-19 08:39:54.309208] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:15.239 [2024-11-19 08:39:54.313832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.239 [2024-11-19 08:39:54.314019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:15.239 [2024-11-19 08:39:54.314063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.632 ms 00:21:15.239 [2024-11-19 08:39:54.314078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.239 [2024-11-19 08:39:54.314174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.239 [2024-11-19 08:39:54.314194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:15.239 [2024-11-19 08:39:54.314211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:15.239 [2024-11-19 08:39:54.314256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.239 [2024-11-19 08:39:54.314302] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:15.239 [2024-11-19 08:39:54.314462] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:15.239 [2024-11-19 08:39:54.314488] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:15.239 [2024-11-19 08:39:54.314505] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:15.239 [2024-11-19 08:39:54.314523] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:15.239 [2024-11-19 08:39:54.314539] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:15.239 [2024-11-19 08:39:54.314555] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:15.239 [2024-11-19 08:39:54.314568] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:15.239 [2024-11-19 08:39:54.314582] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:15.239 [2024-11-19 08:39:54.314597] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:15.239 [2024-11-19 08:39:54.314641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.239 [2024-11-19 08:39:54.314658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:15.239 [2024-11-19 08:39:54.314675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:21:15.239 [2024-11-19 08:39:54.314688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.239 [2024-11-19 08:39:54.314802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.239 
[2024-11-19 08:39:54.314819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:15.239 [2024-11-19 08:39:54.314836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:15.239 [2024-11-19 08:39:54.314849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.239 [2024-11-19 08:39:54.315011] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:15.239 [2024-11-19 08:39:54.315036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:15.239 [2024-11-19 08:39:54.315053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:15.239 [2024-11-19 08:39:54.315066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.239 [2024-11-19 08:39:54.315082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:15.239 [2024-11-19 08:39:54.315094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:15.239 [2024-11-19 08:39:54.315109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:15.239 [2024-11-19 08:39:54.315121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:15.239 [2024-11-19 08:39:54.315135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:15.239 [2024-11-19 08:39:54.315147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:15.239 [2024-11-19 08:39:54.315161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:15.239 [2024-11-19 08:39:54.315173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:15.239 [2024-11-19 08:39:54.315187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:15.239 [2024-11-19 08:39:54.315199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:15.239 [2024-11-19 08:39:54.315213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:15.239 [2024-11-19 08:39:54.315225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.239 [2024-11-19 08:39:54.315241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:15.239 [2024-11-19 08:39:54.315253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:15.239 [2024-11-19 08:39:54.315267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.239 [2024-11-19 08:39:54.315279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:15.239 [2024-11-19 08:39:54.315296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:15.239 [2024-11-19 08:39:54.315308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:15.239 [2024-11-19 08:39:54.315330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:15.239 [2024-11-19 08:39:54.315342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:15.239 [2024-11-19 08:39:54.315357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:15.239 [2024-11-19 08:39:54.315369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:15.239 [2024-11-19 08:39:54.315383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:15.239 [2024-11-19 08:39:54.315395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:15.239 [2024-11-19 08:39:54.315408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:21:15.239 [2024-11-19 08:39:54.315420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:15.239 [2024-11-19 08:39:54.315435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:15.239 [2024-11-19 08:39:54.315446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:15.239 [2024-11-19 08:39:54.315462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:15.239 [2024-11-19 08:39:54.315474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:15.240 [2024-11-19 08:39:54.315502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:15.240 [2024-11-19 08:39:54.315515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:15.240 [2024-11-19 08:39:54.315529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:15.240 [2024-11-19 08:39:54.315541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:15.240 [2024-11-19 08:39:54.315555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:15.240 [2024-11-19 08:39:54.315567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.240 [2024-11-19 08:39:54.315581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:15.240 [2024-11-19 08:39:54.315593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:15.240 [2024-11-19 08:39:54.315621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.240 [2024-11-19 08:39:54.315636] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:15.240 [2024-11-19 08:39:54.315651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:15.240 [2024-11-19 08:39:54.315664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:15.240 [2024-11-19 08:39:54.315679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:15.240 [2024-11-19 08:39:54.315691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:15.240 [2024-11-19 08:39:54.315710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:15.240 [2024-11-19 08:39:54.315722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:15.240 [2024-11-19 08:39:54.315736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:15.240 [2024-11-19 08:39:54.315748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:15.240 [2024-11-19 08:39:54.315763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:15.240 [2024-11-19 08:39:54.315781] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:15.240 [2024-11-19 08:39:54.315801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:15.240 [2024-11-19 08:39:54.315816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:15.240 [2024-11-19 08:39:54.315832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:15.240 [2024-11-19 08:39:54.315844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:21:15.240 [2024-11-19 08:39:54.315859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:15.240 [2024-11-19 08:39:54.315871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:15.240 [2024-11-19 08:39:54.315886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:15.240 [2024-11-19 08:39:54.315899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:15.240 [2024-11-19 08:39:54.315913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:15.240 [2024-11-19 08:39:54.315925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:15.240 [2024-11-19 08:39:54.315941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:15.240 [2024-11-19 08:39:54.315954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:15.240 [2024-11-19 08:39:54.315968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:15.240 [2024-11-19 08:39:54.315981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:15.240 [2024-11-19 08:39:54.315996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:15.240 [2024-11-19 08:39:54.316008] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:15.240 [2024-11-19 08:39:54.316034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:15.240 [2024-11-19 08:39:54.316048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:15.240 [2024-11-19 08:39:54.316062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:15.240 [2024-11-19 08:39:54.316075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:15.240 [2024-11-19 08:39:54.316090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:15.240 [2024-11-19 08:39:54.316104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.240 [2024-11-19 08:39:54.316119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:15.240 [2024-11-19 08:39:54.316133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.189 ms 00:21:15.240 [2024-11-19 08:39:54.316147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.240 [2024-11-19 08:39:54.316246] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:21:15.240 [2024-11-19 08:39:54.316272] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:17.139 [2024-11-19 08:39:56.352672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.139 [2024-11-19 08:39:56.352754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:17.139 [2024-11-19 08:39:56.352779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2036.436 ms 00:21:17.139 [2024-11-19 08:39:56.352796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.139 [2024-11-19 08:39:56.385714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.139 [2024-11-19 08:39:56.385789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:17.139 [2024-11-19 08:39:56.385812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.579 ms 00:21:17.139 [2024-11-19 08:39:56.385829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.139 [2024-11-19 08:39:56.386050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.139 [2024-11-19 08:39:56.386075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:17.139 [2024-11-19 08:39:56.386091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:17.139 [2024-11-19 08:39:56.386109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.399 [2024-11-19 08:39:56.441806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.399 [2024-11-19 08:39:56.441900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:17.399 [2024-11-19 08:39:56.441933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.622 ms 00:21:17.399 [2024-11-19 08:39:56.441957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.399 [2024-11-19 08:39:56.442133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.399 [2024-11-19 08:39:56.442169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:17.399 [2024-11-19 08:39:56.442191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:17.399 [2024-11-19 08:39:56.442215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.399 [2024-11-19 08:39:56.442681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.399 [2024-11-19 08:39:56.442735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:17.399 [2024-11-19 08:39:56.442759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:21:17.399 [2024-11-19 08:39:56.442781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.399 [2024-11-19 08:39:56.443001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.399 [2024-11-19 08:39:56.443037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:17.399 [2024-11-19 08:39:56.443057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:21:17.399 [2024-11-19 08:39:56.443082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.399 [2024-11-19 08:39:56.461561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.399 [2024-11-19 08:39:56.461651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:21:17.399 [2024-11-19 08:39:56.461685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.385 ms 00:21:17.399 [2024-11-19 08:39:56.461712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.399 [2024-11-19 08:39:56.475383] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:17.399 [2024-11-19 08:39:56.490083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.399 [2024-11-19 08:39:56.490176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:17.399 [2024-11-19 08:39:56.490204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.142 ms 00:21:17.399 [2024-11-19 08:39:56.490219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.399 [2024-11-19 08:39:56.555599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.399 [2024-11-19 08:39:56.555693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:17.399 [2024-11-19 08:39:56.555721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.206 ms 00:21:17.399 [2024-11-19 08:39:56.555737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.399 [2024-11-19 08:39:56.556010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.399 [2024-11-19 08:39:56.556032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:17.399 [2024-11-19 08:39:56.556052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:21:17.399 [2024-11-19 08:39:56.556066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.399 [2024-11-19 08:39:56.587957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.399 [2024-11-19 08:39:56.588006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:17.399 [2024-11-19 08:39:56.588030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.842 ms 00:21:17.399 [2024-11-19 08:39:56.588044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.399 [2024-11-19 08:39:56.619415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.399 [2024-11-19 08:39:56.619465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:17.399 [2024-11-19 08:39:56.619498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.274 ms 00:21:17.399 [2024-11-19 08:39:56.619512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.399 [2024-11-19 08:39:56.620339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.399 [2024-11-19 08:39:56.620388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:17.399 [2024-11-19 08:39:56.620410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:21:17.399 [2024-11-19 08:39:56.620424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.658 [2024-11-19 08:39:56.704674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.658 [2024-11-19 08:39:56.704739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:17.658 [2024-11-19 08:39:56.704774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.201 ms 00:21:17.658 [2024-11-19 08:39:56.704788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
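The sizes in the startup trace above are internally consistent and worth a sanity check, since they all derive from one another: the layout dump lists 23592960 L2P entries at an address size of 4 bytes, which is exactly the 90.00 MiB shown for the l2p region of the NV cache, and one entry per 4 KiB logical block gives the 23592960-block (90 GiB) ftl0 device reported once the bdev comes up. The bdev_ftl_create call capped the resident table at --l2p_dram_limit 60, hence the "l2p maximum resident size is: 59 (of 60) MiB" notice. A quick shell check of that arithmetic (illustrative only; the constants are the ones printed in this log):

    entries=23592960          # L2P entries, from the layout dump
    addr=4                    # L2P address size in bytes
    blk=4096                  # logical block size of ftl0
    echo "l2p map:  $(( entries * addr / 1024 / 1024 )) MiB"        # -> 90 MiB
    echo "logical:  $(( entries * blk / 1024 / 1024 / 1024 )) GiB"  # -> 90 GiB

With only 60 of those 90 MiB allowed in DRAM, the remaining L2P pages live in the l2p region on the cache device and are loaded on demand.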
00:21:17.658 [2024-11-19 08:39:56.738052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.658 [2024-11-19 08:39:56.738106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:17.658 [2024-11-19 08:39:56.738147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.116 ms 00:21:17.658 [2024-11-19 08:39:56.738162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.658 [2024-11-19 08:39:56.770592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.658 [2024-11-19 08:39:56.770654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:17.658 [2024-11-19 08:39:56.770679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.309 ms 00:21:17.658 [2024-11-19 08:39:56.770692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.658 [2024-11-19 08:39:56.802669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.658 [2024-11-19 08:39:56.802721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:17.658 [2024-11-19 08:39:56.802745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.866 ms 00:21:17.658 [2024-11-19 08:39:56.802780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.658 [2024-11-19 08:39:56.802935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.658 [2024-11-19 08:39:56.802961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:17.658 [2024-11-19 08:39:56.802982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:17.658 [2024-11-19 08:39:56.802995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.658 [2024-11-19 08:39:56.803101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.658 [2024-11-19 08:39:56.803119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:17.658 [2024-11-19 08:39:56.803136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:17.658 [2024-11-19 08:39:56.803148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.658 [2024-11-19 08:39:56.804408] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:17.658 [2024-11-19 08:39:56.808598] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2506.170 ms, result 0 00:21:17.658 [2024-11-19 08:39:56.809459] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:17.658 { 00:21:17.658 "name": "ftl0", 00:21:17.658 "uuid": "e7b4f20c-cbe0-42b7-a2fd-ee64b862d20d" 00:21:17.658 } 00:21:17.658 08:39:56 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:21:17.659 08:39:56 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:17.659 08:39:56 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:17.659 08:39:56 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:21:17.659 08:39:56 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:17.659 08:39:56 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:17.659 08:39:56 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:17.916 08:39:57 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:18.175 [ 00:21:18.175 { 00:21:18.175 "name": "ftl0", 00:21:18.175 "aliases": [ 00:21:18.175 "e7b4f20c-cbe0-42b7-a2fd-ee64b862d20d" 00:21:18.175 ], 00:21:18.175 "product_name": "FTL disk", 00:21:18.175 "block_size": 4096, 00:21:18.175 "num_blocks": 23592960, 00:21:18.175 "uuid": "e7b4f20c-cbe0-42b7-a2fd-ee64b862d20d", 00:21:18.175 "assigned_rate_limits": { 00:21:18.175 "rw_ios_per_sec": 0, 00:21:18.175 "rw_mbytes_per_sec": 0, 00:21:18.175 "r_mbytes_per_sec": 0, 00:21:18.175 "w_mbytes_per_sec": 0 00:21:18.175 }, 00:21:18.175 "claimed": false, 00:21:18.175 "zoned": false, 00:21:18.175 "supported_io_types": { 00:21:18.175 "read": true, 00:21:18.175 "write": true, 00:21:18.175 "unmap": true, 00:21:18.175 "flush": true, 00:21:18.175 "reset": false, 00:21:18.175 "nvme_admin": false, 00:21:18.175 "nvme_io": false, 00:21:18.175 "nvme_io_md": false, 00:21:18.175 "write_zeroes": true, 00:21:18.175 "zcopy": false, 00:21:18.175 "get_zone_info": false, 00:21:18.175 "zone_management": false, 00:21:18.175 "zone_append": false, 00:21:18.175 "compare": false, 00:21:18.175 "compare_and_write": false, 00:21:18.175 "abort": false, 00:21:18.175 "seek_hole": false, 00:21:18.175 "seek_data": false, 00:21:18.175 "copy": false, 00:21:18.175 "nvme_iov_md": false 00:21:18.175 }, 00:21:18.175 "driver_specific": { 00:21:18.175 "ftl": { 00:21:18.175 "base_bdev": "041dac0b-c405-4e40-8f17-ebe953fe98a6", 00:21:18.175 "cache": "nvc0n1p0" 00:21:18.175 } 00:21:18.175 } 00:21:18.175 } 00:21:18.175 ] 00:21:18.175 08:39:57 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:21:18.175 08:39:57 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:21:18.175 08:39:57 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:18.742 08:39:57 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:21:18.742 08:39:57 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:21:19.001 08:39:58 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:21:19.001 { 00:21:19.001 "name": "ftl0", 00:21:19.001 "aliases": [ 00:21:19.001 "e7b4f20c-cbe0-42b7-a2fd-ee64b862d20d" 00:21:19.002 ], 00:21:19.002 "product_name": "FTL disk", 00:21:19.002 "block_size": 4096, 00:21:19.002 "num_blocks": 23592960, 00:21:19.002 "uuid": "e7b4f20c-cbe0-42b7-a2fd-ee64b862d20d", 00:21:19.002 "assigned_rate_limits": { 00:21:19.002 "rw_ios_per_sec": 0, 00:21:19.002 "rw_mbytes_per_sec": 0, 00:21:19.002 "r_mbytes_per_sec": 0, 00:21:19.002 "w_mbytes_per_sec": 0 00:21:19.002 }, 00:21:19.002 "claimed": false, 00:21:19.002 "zoned": false, 00:21:19.002 "supported_io_types": { 00:21:19.002 "read": true, 00:21:19.002 "write": true, 00:21:19.002 "unmap": true, 00:21:19.002 "flush": true, 00:21:19.002 "reset": false, 00:21:19.002 "nvme_admin": false, 00:21:19.002 "nvme_io": false, 00:21:19.002 "nvme_io_md": false, 00:21:19.002 "write_zeroes": true, 00:21:19.002 "zcopy": false, 00:21:19.002 "get_zone_info": false, 00:21:19.002 "zone_management": false, 00:21:19.002 "zone_append": false, 00:21:19.002 "compare": false, 00:21:19.002 "compare_and_write": false, 00:21:19.002 "abort": false, 00:21:19.002 "seek_hole": false, 00:21:19.002 "seek_data": false, 00:21:19.002 "copy": false, 00:21:19.002 "nvme_iov_md": false 00:21:19.002 }, 00:21:19.002 "driver_specific": { 00:21:19.002 "ftl": { 00:21:19.002 "base_bdev": "041dac0b-c405-4e40-8f17-ebe953fe98a6", 
00:21:19.002 "cache": "nvc0n1p0" 00:21:19.002 } 00:21:19.002 } 00:21:19.002 } 00:21:19.002 ]' 00:21:19.002 08:39:58 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:21:19.002 08:39:58 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:21:19.002 08:39:58 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:19.261 [2024-11-19 08:39:58.382026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.261 [2024-11-19 08:39:58.382094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:19.261 [2024-11-19 08:39:58.382122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:19.261 [2024-11-19 08:39:58.382144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.261 [2024-11-19 08:39:58.382190] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:19.261 [2024-11-19 08:39:58.385616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.261 [2024-11-19 08:39:58.385662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:19.261 [2024-11-19 08:39:58.385687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.394 ms 00:21:19.261 [2024-11-19 08:39:58.385701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.261 [2024-11-19 08:39:58.386323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.261 [2024-11-19 08:39:58.386361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:19.261 [2024-11-19 08:39:58.386383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:21:19.261 [2024-11-19 08:39:58.386397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.261 [2024-11-19 08:39:58.390211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.261 [2024-11-19 08:39:58.390249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:19.261 [2024-11-19 08:39:58.390286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.771 ms 00:21:19.261 [2024-11-19 08:39:58.390300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.261 [2024-11-19 08:39:58.397994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.261 [2024-11-19 08:39:58.398034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:19.261 [2024-11-19 08:39:58.398072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.631 ms 00:21:19.261 [2024-11-19 08:39:58.398086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.261 [2024-11-19 08:39:58.429722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.261 [2024-11-19 08:39:58.429769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:19.261 [2024-11-19 08:39:58.429796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.522 ms 00:21:19.261 [2024-11-19 08:39:58.429810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.261 [2024-11-19 08:39:58.449164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.261 [2024-11-19 08:39:58.449218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:19.261 [2024-11-19 08:39:58.449243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 19.235 ms 00:21:19.261 [2024-11-19 08:39:58.449261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.261 [2024-11-19 08:39:58.449522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.261 [2024-11-19 08:39:58.449545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:19.261 [2024-11-19 08:39:58.449563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:21:19.261 [2024-11-19 08:39:58.449577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.261 [2024-11-19 08:39:58.481230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.261 [2024-11-19 08:39:58.481276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:19.261 [2024-11-19 08:39:58.481300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.579 ms 00:21:19.261 [2024-11-19 08:39:58.481314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.261 [2024-11-19 08:39:58.512680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.261 [2024-11-19 08:39:58.512725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:19.261 [2024-11-19 08:39:58.512766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.256 ms 00:21:19.261 [2024-11-19 08:39:58.512780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.261 [2024-11-19 08:39:58.543649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.261 [2024-11-19 08:39:58.543822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:19.261 [2024-11-19 08:39:58.543859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.757 ms 00:21:19.261 [2024-11-19 08:39:58.543874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.521 [2024-11-19 08:39:58.575214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.521 [2024-11-19 08:39:58.575397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:19.521 [2024-11-19 08:39:58.575434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.144 ms 00:21:19.521 [2024-11-19 08:39:58.575448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.521 [2024-11-19 08:39:58.575569] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:19.521 [2024-11-19 08:39:58.575597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575763] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:19.521 [2024-11-19 08:39:58.575971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 
[2024-11-19 08:39:58.576176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:21:19.522 [2024-11-19 08:39:58.576548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.576993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.577009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.577022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.577037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.577051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.577066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.577080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.577095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:19.522 [2024-11-19 08:39:58.577108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:19.523 [2024-11-19 08:39:58.577124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:19.523 [2024-11-19 08:39:58.577137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:19.523 [2024-11-19 08:39:58.577153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:19.523 [2024-11-19 08:39:58.577166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:19.523 [2024-11-19 08:39:58.577184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:19.523 [2024-11-19 08:39:58.577207] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:19.523 [2024-11-19 08:39:58.577224] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e7b4f20c-cbe0-42b7-a2fd-ee64b862d20d 00:21:19.523 [2024-11-19 08:39:58.577238] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:19.523 [2024-11-19 08:39:58.577259] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:19.523 [2024-11-19 08:39:58.577274] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:19.523 [2024-11-19 08:39:58.577289] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:19.523 [2024-11-19 08:39:58.577304] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:19.523 [2024-11-19 08:39:58.577319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:21:19.523 [2024-11-19 08:39:58.577332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:19.523 [2024-11-19 08:39:58.577346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:19.523 [2024-11-19 08:39:58.577357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:19.523 [2024-11-19 08:39:58.577373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.523 [2024-11-19 08:39:58.577387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:19.523 [2024-11-19 08:39:58.577404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.809 ms 00:21:19.523 [2024-11-19 08:39:58.577417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.523 [2024-11-19 08:39:58.594287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.523 [2024-11-19 08:39:58.594331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:19.523 [2024-11-19 08:39:58.594361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.819 ms 00:21:19.523 [2024-11-19 08:39:58.594375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.523 [2024-11-19 08:39:58.594911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.523 [2024-11-19 08:39:58.594944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:19.523 [2024-11-19 08:39:58.594965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:21:19.523 [2024-11-19 08:39:58.594979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.523 [2024-11-19 08:39:58.653892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.523 [2024-11-19 08:39:58.653967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:19.523 [2024-11-19 08:39:58.653992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.523 [2024-11-19 08:39:58.654006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.523 [2024-11-19 08:39:58.654185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.523 [2024-11-19 08:39:58.654205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:19.523 [2024-11-19 08:39:58.654223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.523 [2024-11-19 08:39:58.654237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.523 [2024-11-19 08:39:58.654333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.523 [2024-11-19 08:39:58.654354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:19.523 [2024-11-19 08:39:58.654378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.523 [2024-11-19 08:39:58.654392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.523 [2024-11-19 08:39:58.654436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.523 [2024-11-19 08:39:58.654451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:19.523 [2024-11-19 08:39:58.654467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.523 [2024-11-19 08:39:58.654480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.523 [2024-11-19 08:39:58.764805] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.523 [2024-11-19 08:39:58.765063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:19.523 [2024-11-19 08:39:58.765100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.523 [2024-11-19 08:39:58.765116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-11-19 08:39:58.850891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.782 [2024-11-19 08:39:58.850967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:19.782 [2024-11-19 08:39:58.850993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.782 [2024-11-19 08:39:58.851024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-11-19 08:39:58.851159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.782 [2024-11-19 08:39:58.851180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:19.782 [2024-11-19 08:39:58.851225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.782 [2024-11-19 08:39:58.851243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-11-19 08:39:58.851312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.782 [2024-11-19 08:39:58.851328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:19.782 [2024-11-19 08:39:58.851344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.782 [2024-11-19 08:39:58.851357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-11-19 08:39:58.851528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.782 [2024-11-19 08:39:58.851551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:19.782 [2024-11-19 08:39:58.851568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.782 [2024-11-19 08:39:58.851581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-11-19 08:39:58.851730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.782 [2024-11-19 08:39:58.851761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:19.782 [2024-11-19 08:39:58.851787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.782 [2024-11-19 08:39:58.851809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-11-19 08:39:58.851914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.782 [2024-11-19 08:39:58.851946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:19.782 [2024-11-19 08:39:58.851981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.782 [2024-11-19 08:39:58.852011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.782 [2024-11-19 08:39:58.852101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.782 [2024-11-19 08:39:58.852119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:19.782 [2024-11-19 08:39:58.852135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.782 [2024-11-19 08:39:58.852148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:19.782 [2024-11-19 08:39:58.852373] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 470.329 ms, result 0 00:21:19.782 true 00:21:19.782 08:39:58 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 75756 00:21:19.782 08:39:58 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 75756 ']' 00:21:19.782 08:39:58 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 75756 00:21:19.782 08:39:58 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:19.782 08:39:58 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.782 08:39:58 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75756 00:21:19.782 killing process with pid 75756 00:21:19.782 08:39:58 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:19.782 08:39:58 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:19.782 08:39:58 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75756' 00:21:19.782 08:39:58 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 75756 00:21:19.782 08:39:58 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 75756 00:21:25.046 08:40:03 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:21:25.306 65536+0 records in 00:21:25.306 65536+0 records out 00:21:25.306 268435456 bytes (268 MB, 256 MiB) copied, 1.17929 s, 228 MB/s 00:21:25.306 08:40:04 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:25.566 [2024-11-19 08:40:04.625128] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
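The two trim.sh steps traced above stage the test data and replay it onto the FTL device: dd produces 65536 x 4 KiB = 268435456 B = 256 MiB of random data (268435456 B / 1.17929 s ~ 228 MB/s, matching the reported rate), and spdk_dd then copies that pattern onto ftl0 using the bdev subsystem config saved earlier with save_subsystem_config. A minimal sketch of the same two steps, assuming an SPDK checkout at the path shown in the trace and that dd's output is redirected into the random_pattern file spdk_dd reads (the redirection target itself is not visible in the trace):

    #!/usr/bin/env bash
    set -euo pipefail
    SPDK=/home/vagrant/spdk_repo/spdk            # checkout path as seen in the trace
    PATTERN="$SPDK/test/ftl/random_pattern"      # input file named in the spdk_dd call
    # 65536 blocks of 4 KiB = 256 MiB of random test data (dd reported ~228 MB/s here).
    dd if=/dev/urandom of="$PATTERN" bs=4K count=65536
    # Replay the pattern onto the FTL bdev; ftl.json is the bdev subsystem config
    # saved earlier in this run via save_subsystem_config -n bdev.
    "$SPDK/build/bin/spdk_dd" --if="$PATTERN" --ob=ftl0 \
        --json="$SPDK/test/ftl/config/ftl.json"

The "Copying: N/256 [MB]" progress printed further below tracks the spdk_dd transfer itself, which moves through the FTL write buffer at roughly 25 MB/s rather than at raw dd speed.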
00:21:25.566 [2024-11-19 08:40:04.625293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75955 ] 00:21:25.566 [2024-11-19 08:40:04.799432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.825 [2024-11-19 08:40:04.901736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.083 [2024-11-19 08:40:05.211860] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:26.083 [2024-11-19 08:40:05.212219] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:26.083 [2024-11-19 08:40:05.373498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.083 [2024-11-19 08:40:05.373554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:26.083 [2024-11-19 08:40:05.373575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:26.083 [2024-11-19 08:40:05.373587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.344 [2024-11-19 08:40:05.377011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.344 [2024-11-19 08:40:05.377058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:26.344 [2024-11-19 08:40:05.377075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.374 ms 00:21:26.344 [2024-11-19 08:40:05.377087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.344 [2024-11-19 08:40:05.377294] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:26.344 [2024-11-19 08:40:05.378300] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:26.344 [2024-11-19 08:40:05.378360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.344 [2024-11-19 08:40:05.378374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:26.344 [2024-11-19 08:40:05.378387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:21:26.344 [2024-11-19 08:40:05.378398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.344 [2024-11-19 08:40:05.379690] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:26.344 [2024-11-19 08:40:05.395442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.344 [2024-11-19 08:40:05.395493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:26.344 [2024-11-19 08:40:05.395529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.753 ms 00:21:26.344 [2024-11-19 08:40:05.395540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.344 [2024-11-19 08:40:05.395682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.344 [2024-11-19 08:40:05.395705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:26.344 [2024-11-19 08:40:05.395719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:26.344 [2024-11-19 08:40:05.395730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.344 [2024-11-19 08:40:05.400200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
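Here ftl0 is brought back up for the write phase: the base bdev is opened, nvc0n1p0 is attached as the write-buffer (NV) cache, and the superblock is loaded and validated, mirroring the driver_specific.ftl fields in the bdev_get_bdevs dump at the top of this section. For reference, a hedged sketch of the RPC calls behind such a device; bdev_get_bdevs and bdev_ftl_unload appear verbatim in this trace, while the bdev_ftl_create flags are assumed from SPDK's rpc.py and the bdev names are copied from the dump:

    #!/usr/bin/env bash
    set -euo pipefail
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Assumed creation call: base bdev and cache bdev names as reported in
    # driver_specific.ftl of the earlier bdev dump.
    "$RPC" bdev_ftl_create -b ftl0 -d 041dac0b-c405-4e40-8f17-ebe953fe98a6 -c nvc0n1p0
    # Verbatim from the trace: wait up to 2000 ms for ftl0 to appear, then read its size.
    "$RPC" bdev_get_bdevs -b ftl0 -t 2000 | jq '.[] .num_blocks'   # -> 23592960
    # Verbatim from the trace: unload triggers the 'FTL shutdown' sequence logged above.
    "$RPC" bdev_ftl_unload -b ftl0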
00:21:26.344 [2024-11-19 08:40:05.400247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:26.344 [2024-11-19 08:40:05.400279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.412 ms 00:21:26.344 [2024-11-19 08:40:05.400290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.344 [2024-11-19 08:40:05.400412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.344 [2024-11-19 08:40:05.400432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:26.345 [2024-11-19 08:40:05.400444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:26.345 [2024-11-19 08:40:05.400455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.345 [2024-11-19 08:40:05.400493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.345 [2024-11-19 08:40:05.400512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:26.345 [2024-11-19 08:40:05.400524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:26.345 [2024-11-19 08:40:05.400534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.345 [2024-11-19 08:40:05.400573] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:26.345 [2024-11-19 08:40:05.404829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.345 [2024-11-19 08:40:05.404866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:26.345 [2024-11-19 08:40:05.404897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.264 ms 00:21:26.345 [2024-11-19 08:40:05.404909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.345 [2024-11-19 08:40:05.404975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.345 [2024-11-19 08:40:05.404992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:26.345 [2024-11-19 08:40:05.405004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:26.345 [2024-11-19 08:40:05.405014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.345 [2024-11-19 08:40:05.405044] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:26.345 [2024-11-19 08:40:05.405076] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:26.345 [2024-11-19 08:40:05.405117] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:26.345 [2024-11-19 08:40:05.405136] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:26.345 [2024-11-19 08:40:05.405244] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:26.345 [2024-11-19 08:40:05.405259] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:26.345 [2024-11-19 08:40:05.405272] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:26.345 [2024-11-19 08:40:05.405286] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:26.345 [2024-11-19 08:40:05.405303] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:26.345 [2024-11-19 08:40:05.405331] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:26.345 [2024-11-19 08:40:05.405341] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:26.345 [2024-11-19 08:40:05.405351] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:26.345 [2024-11-19 08:40:05.405377] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:26.345 [2024-11-19 08:40:05.405389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.345 [2024-11-19 08:40:05.405400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:26.345 [2024-11-19 08:40:05.405412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:21:26.345 [2024-11-19 08:40:05.405423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.345 [2024-11-19 08:40:05.405523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.345 [2024-11-19 08:40:05.405539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:26.345 [2024-11-19 08:40:05.405555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:26.345 [2024-11-19 08:40:05.405566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.345 [2024-11-19 08:40:05.405742] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:26.345 [2024-11-19 08:40:05.405776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:26.345 [2024-11-19 08:40:05.405805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:26.345 [2024-11-19 08:40:05.405817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.345 [2024-11-19 08:40:05.405829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:26.345 [2024-11-19 08:40:05.405839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:26.345 [2024-11-19 08:40:05.405849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:26.345 [2024-11-19 08:40:05.405861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:26.345 [2024-11-19 08:40:05.405872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:26.345 [2024-11-19 08:40:05.405885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:26.345 [2024-11-19 08:40:05.405896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:26.345 [2024-11-19 08:40:05.405906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:26.345 [2024-11-19 08:40:05.405915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:26.345 [2024-11-19 08:40:05.405940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:26.345 [2024-11-19 08:40:05.405953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:26.345 [2024-11-19 08:40:05.405963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.345 [2024-11-19 08:40:05.405974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:26.345 [2024-11-19 08:40:05.405984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:26.345 [2024-11-19 08:40:05.405993] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.345 [2024-11-19 08:40:05.406004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:26.345 [2024-11-19 08:40:05.406019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:26.345 [2024-11-19 08:40:05.406029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:26.345 [2024-11-19 08:40:05.406039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:26.345 [2024-11-19 08:40:05.406049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:26.345 [2024-11-19 08:40:05.406066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:26.345 [2024-11-19 08:40:05.406080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:26.345 [2024-11-19 08:40:05.406091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:26.345 [2024-11-19 08:40:05.406101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:26.345 [2024-11-19 08:40:05.406111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:26.345 [2024-11-19 08:40:05.406123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:26.345 [2024-11-19 08:40:05.406142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:26.345 [2024-11-19 08:40:05.406160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:26.345 [2024-11-19 08:40:05.406179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:26.345 [2024-11-19 08:40:05.406191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:26.345 [2024-11-19 08:40:05.406202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:26.345 [2024-11-19 08:40:05.406212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:26.345 [2024-11-19 08:40:05.406222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:26.345 [2024-11-19 08:40:05.406232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:26.345 [2024-11-19 08:40:05.406249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:26.345 [2024-11-19 08:40:05.406265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.345 [2024-11-19 08:40:05.406276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:26.345 [2024-11-19 08:40:05.406287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:26.345 [2024-11-19 08:40:05.406297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.345 [2024-11-19 08:40:05.406306] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:26.345 [2024-11-19 08:40:05.406317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:26.345 [2024-11-19 08:40:05.406330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:26.345 [2024-11-19 08:40:05.406352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.345 [2024-11-19 08:40:05.406364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:26.345 [2024-11-19 08:40:05.406375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:26.345 [2024-11-19 08:40:05.406384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:26.345 
[2024-11-19 08:40:05.406395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:26.345 [2024-11-19 08:40:05.406405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:26.345 [2024-11-19 08:40:05.406416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:26.345 [2024-11-19 08:40:05.406436] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:26.345 [2024-11-19 08:40:05.406459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:26.345 [2024-11-19 08:40:05.406474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:26.345 [2024-11-19 08:40:05.406485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:26.345 [2024-11-19 08:40:05.406496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:26.346 [2024-11-19 08:40:05.406507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:26.346 [2024-11-19 08:40:05.406518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:26.346 [2024-11-19 08:40:05.406531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:26.346 [2024-11-19 08:40:05.406559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:26.346 [2024-11-19 08:40:05.406573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:26.346 [2024-11-19 08:40:05.406584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:26.346 [2024-11-19 08:40:05.406594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:26.346 [2024-11-19 08:40:05.406611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:26.346 [2024-11-19 08:40:05.406645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:26.346 [2024-11-19 08:40:05.406658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:26.346 [2024-11-19 08:40:05.406675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:26.346 [2024-11-19 08:40:05.406687] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:26.346 [2024-11-19 08:40:05.406699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:26.346 [2024-11-19 08:40:05.406711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:26.346 [2024-11-19 08:40:05.406725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:26.346 [2024-11-19 08:40:05.406744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:26.346 [2024-11-19 08:40:05.406763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:26.346 [2024-11-19 08:40:05.406778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.346 [2024-11-19 08:40:05.406790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:26.346 [2024-11-19 08:40:05.406809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:21:26.346 [2024-11-19 08:40:05.406821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.346 [2024-11-19 08:40:05.439155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.346 [2024-11-19 08:40:05.439214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:26.346 [2024-11-19 08:40:05.439250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.246 ms 00:21:26.346 [2024-11-19 08:40:05.439261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.346 [2024-11-19 08:40:05.439443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.346 [2024-11-19 08:40:05.439468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:26.346 [2024-11-19 08:40:05.439520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:26.346 [2024-11-19 08:40:05.439531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.346 [2024-11-19 08:40:05.488475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.346 [2024-11-19 08:40:05.488798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:26.346 [2024-11-19 08:40:05.488832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.908 ms 00:21:26.346 [2024-11-19 08:40:05.488852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.346 [2024-11-19 08:40:05.489018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.346 [2024-11-19 08:40:05.489040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:26.346 [2024-11-19 08:40:05.489054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:26.346 [2024-11-19 08:40:05.489066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.346 [2024-11-19 08:40:05.489435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.346 [2024-11-19 08:40:05.489454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:26.346 [2024-11-19 08:40:05.489467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:21:26.346 [2024-11-19 08:40:05.489486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.346 [2024-11-19 08:40:05.489639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.346 [2024-11-19 08:40:05.489674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:26.346 [2024-11-19 08:40:05.489688] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:21:26.346 [2024-11-19 08:40:05.489698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.346 [2024-11-19 08:40:05.506265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.346 [2024-11-19 08:40:05.506311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:26.346 [2024-11-19 08:40:05.506345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.522 ms 00:21:26.346 [2024-11-19 08:40:05.506371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.346 [2024-11-19 08:40:05.523176] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:26.346 [2024-11-19 08:40:05.523223] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:26.346 [2024-11-19 08:40:05.523258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.346 [2024-11-19 08:40:05.523270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:26.346 [2024-11-19 08:40:05.523282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.746 ms 00:21:26.346 [2024-11-19 08:40:05.523293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.346 [2024-11-19 08:40:05.552292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.346 [2024-11-19 08:40:05.552499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:26.346 [2024-11-19 08:40:05.552549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.904 ms 00:21:26.346 [2024-11-19 08:40:05.552571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.346 [2024-11-19 08:40:05.567973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.346 [2024-11-19 08:40:05.568014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:26.346 [2024-11-19 08:40:05.568046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.265 ms 00:21:26.346 [2024-11-19 08:40:05.568056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.346 [2024-11-19 08:40:05.583081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.346 [2024-11-19 08:40:05.583121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:26.346 [2024-11-19 08:40:05.583154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.933 ms 00:21:26.346 [2024-11-19 08:40:05.583164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.346 [2024-11-19 08:40:05.584042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.346 [2024-11-19 08:40:05.584081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:26.346 [2024-11-19 08:40:05.584096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:21:26.346 [2024-11-19 08:40:05.584107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.606 [2024-11-19 08:40:05.654098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.606 [2024-11-19 08:40:05.654170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:26.606 [2024-11-19 08:40:05.654207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 69.953 ms 00:21:26.606 [2024-11-19 08:40:05.654219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.606 [2024-11-19 08:40:05.666804] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:26.606 [2024-11-19 08:40:05.680135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.606 [2024-11-19 08:40:05.680463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:26.606 [2024-11-19 08:40:05.680505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.742 ms 00:21:26.606 [2024-11-19 08:40:05.680518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.606 [2024-11-19 08:40:05.680729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.606 [2024-11-19 08:40:05.680760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:26.606 [2024-11-19 08:40:05.680774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:26.606 [2024-11-19 08:40:05.680787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.606 [2024-11-19 08:40:05.680872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.606 [2024-11-19 08:40:05.680888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:26.606 [2024-11-19 08:40:05.680901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:26.606 [2024-11-19 08:40:05.680912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.606 [2024-11-19 08:40:05.680955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.606 [2024-11-19 08:40:05.680971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:26.606 [2024-11-19 08:40:05.680991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:26.606 [2024-11-19 08:40:05.681002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.606 [2024-11-19 08:40:05.681066] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:26.606 [2024-11-19 08:40:05.681083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.606 [2024-11-19 08:40:05.681094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:26.606 [2024-11-19 08:40:05.681105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:26.606 [2024-11-19 08:40:05.681115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.606 [2024-11-19 08:40:05.712127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.606 [2024-11-19 08:40:05.712199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:26.606 [2024-11-19 08:40:05.712217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.986 ms 00:21:26.606 [2024-11-19 08:40:05.712228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.606 [2024-11-19 08:40:05.712376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.606 [2024-11-19 08:40:05.712398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:26.606 [2024-11-19 08:40:05.712411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:26.606 [2024-11-19 08:40:05.712422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
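The restore path above rebuilds the logical-to-physical map before I/O starts, and its size follows directly from the layout numbers dumped during startup: 23592960 L2P entries at the 4-byte address size is exactly the 90.00 MiB l2p region, and the same 23592960 blocks at the 4096-byte block size give the 90 GiB capacity that num_blocks reported for ftl0. A quick arithmetic cross-check, with all values copied from the dump:

    #!/usr/bin/env bash
    entries=23592960    # "L2P entries" from ftl_layout_setup
    addr=4              # "L2P address size" (bytes per entry)
    blk=4096            # "block_size" from the bdev_get_bdevs dump
    echo "$(( entries * addr / 1024 / 1024 )) MiB of L2P"          # 90 -> "Region l2p ... 90.00 MiB"
    echo "$(( entries * blk  / 1024 / 1024 / 1024 )) GiB exposed"  # 90 -> num_blocks * block_size

Note the exposed 90 GiB is smaller than the 102400 MiB data_btm region of the base device; the difference is presumably space the FTL holds back for its own metadata and relocation headroom, though the trace does not state this explicitly.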
00:21:26.606 [2024-11-19 08:40:05.713641] mngt/ftl_mngt_ioch.c: io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:26.606 [2024-11-19 08:40:05.717808] mngt/ftl_mngt.c: finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 339.740 ms, result 0
00:21:26.606 [2024-11-19 08:40:05.718604] mngt/ftl_mngt_ioch.c: io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:26.606 [2024-11-19 08:40:05.735213] mngt/ftl_mngt_ioch.c: io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:27.543 [2024-11-19T08:40:07.773Z] Copying: 24/256 [MB] (24 MBps) ... [2024-11-19T08:40:16.034Z] Copying: 256/256 [MB] (average 25 MBps)
00:21:36.738 [2024-11-19 08:40:15.971872] mngt/ftl_mngt_ioch.c: io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:36.738 [2024-11-19 08:40:15.987207] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.005 ms, status: 0
00:21:36.738 [2024-11-19 08:40:15.987385] mngt/ftl_mngt_ioch.c: io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:21:36.738 [2024-11-19 08:40:15.991626] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 4.199 ms, status: 0
00:21:36.738 [2024-11-19 08:40:15.993456] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 1.655 ms, status: 0
00:21:36.738 [2024-11-19 08:40:16.002279] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 8.638 ms, status: 0
00:21:36.738 [2024-11-19 08:40:16.011896] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims, duration: 9.342 ms, status: 0
00:21:36.998 [2024-11-19 08:40:16.050098] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 37.996 ms, status: 0
00:21:36.998 [2024-11-19 08:40:16.071618] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 21.293 ms, status: 0
00:21:36.998 [2024-11-19 08:40:16.072013] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 0.150 ms, status: 0
00:21:36.998 [2024-11-19 08:40:16.107539] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata, duration: 35.393 ms, status: 0
00:21:36.998 [2024-11-19 08:40:16.138914] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata, duration: 31.132 ms, status: 0
00:21:36.998 [2024-11-19 08:40:16.170168] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 31.076 ms, status: 0
00:21:36.998 [2024-11-19 08:40:16.201145] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 30.748 ms, status: 0
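Every management step in the shutdown sequence above is traced the same way: an action name, a duration, and a status. When combing a log like this for slow steps it helps to pull those fields out and sort; a minimal sketch in the job's own shell, assuming the one-line-per-step digest format used here (a log-scraping aid, not anything from the SPDK tree):

#!/usr/bin/env bash
# List the ten slowest FTL management steps recorded in a log file ($1).
# Assumes lines shaped like "... Action: <name>, duration: <N> ms, status: <S>".
awk -F 'Action: |, duration: | ms' \
  '/Action: .*duration:/ { printf "%10.3f ms  %s\n", $3, $2 }' "$1" |
  sort -rn | head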
00:21:36.998 [2024-11-19 08:40:16.201349] ftl_debug.c: ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:21:36.998 [2024-11-19 08:40:16.201405] ftl_debug.c: ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands report identical values)
00:21:37.000 [2024-11-19 08:40:16.203682] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:21:37.000 [2024-11-19 08:40:16.203695] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e7b4f20c-cbe0-42b7-a2fd-ee64b862d20d
00:21:37.000 [2024-11-19 08:40:16.203706] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:21:37.000 [2024-11-19 08:40:16.203717] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:21:37.000 [2024-11-19 08:40:16.203727] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:21:37.000 [2024-11-19 08:40:16.203738] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:21:37.000 [2024-11-19 08:40:16.203749] ftl_debug.c: ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
00:21:37.000 [2024-11-19 08:40:16.203818] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 2.471 ms, status: 0
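The WAF line in the dump above reads as media writes divided by host writes: this pass wrote 960 blocks of its own metadata while serving no user I/O, so the ratio is undefined and prints as inf. A sketch of the same arithmetic (the pairing of the two counters is my reading of the dump, not a documented formula):

#!/usr/bin/env bash
# Recompute the WAF line from the two write counters in the stats dump above.
total_writes=960   # "total writes": everything the FTL put on media
user_writes=0      # "user writes": host-requested writes only
if [ "$user_writes" -eq 0 ]; then
  echo "WAF: inf"   # no user writes, so amplification is undefined
else
  awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
fi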
00:21:37.000 [2024-11-19 08:40:16.221363] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 17.450 ms, status: 0
00:21:37.000 [2024-11-19 08:40:16.222290] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.404 ms, status: 0
00:21:37.000 [2024-11-19 08:40:16.269856] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
00:21:37.000 [2024-11-19 08:40:16.270402] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
00:21:37.000 [2024-11-19 08:40:16.270669] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
00:21:37.000 [2024-11-19 08:40:16.271043] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
00:21:37.258 [2024-11-19 08:40:16.368348] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
00:21:37.258 [2024-11-19 08:40:16.446158] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
00:21:37.259 [2024-11-19 08:40:16.446812] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
00:21:37.259 [2024-11-19 08:40:16.446935] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
00:21:37.259 [2024-11-19 08:40:16.447110] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
00:21:37.259 [2024-11-19 08:40:16.447203] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
00:21:37.259 [2024-11-19 08:40:16.447299] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
00:21:37.259 [2024-11-19 08:40:16.447389] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
00:21:37.259 [2024-11-19 08:40:16.447631] mngt/ftl_mngt.c: finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 460.404 ms, result 0
00:21:38.194
00:21:38.453 08:40:17 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76085
00:21:38.453 08:40:17 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:21:38.453 08:40:17 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76085
00:21:38.453 08:40:17 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76085 ']'
00:21:38.453 08:40:17 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:38.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:38.453 08:40:17 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:38.453 08:40:17 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:38.453 08:40:17 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:38.453 08:40:17 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
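waitforlisten above is the autotest helper that blocks until the freshly started spdk_tgt answers on its RPC socket. The start-and-wait pattern looks roughly like the following; a sketch against the /home/vagrant/spdk_repo layout this job uses, not the helper's verbatim body:

#!/usr/bin/env bash
# Start spdk_tgt and poll the default RPC socket (/var/tmp/spdk.sock) until it
# answers, capping attempts the way max_retries=100 above suggests.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" -L ftl_init &
svcpid=$!
for _ in $(seq 1 100); do
  if "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods &> /dev/null; then
    break
  fi
  sleep 0.1
done
echo "spdk_tgt (pid $svcpid) is listening"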
00:21:38.453 [2024-11-19 08:40:17.632161] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
00:21:38.453 [2024-11-19 08:40:17.632500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76085 ]
00:21:38.711 [2024-11-19 08:40:17.815582] app.c: spdk_app_start: *NOTICE*: Total cores available: 1
00:21:38.711 [2024-11-19 08:40:17.910827] reactor.c: reactor_run: *NOTICE*: Reactor started on core 0
00:21:39.647 08:40:18 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:39.647 08:40:18 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:21:39.647 08:40:18 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:21:39.907 [2024-11-19 08:40:18.915278] bdev.c: bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:39.907 [2024-11-19 08:40:18.915588] bdev.c: bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:39.907 [2024-11-19 08:40:19.105221] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.009 ms, status: 0
00:21:39.907 [2024-11-19 08:40:19.110227] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 4.487 ms, status: 0
00:21:39.907 [2024-11-19 08:40:19.110535] mngt/ftl_mngt_bdev.c: ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:21:39.907 [2024-11-19 08:40:19.111580] mngt/ftl_mngt_bdev.c: ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:21:39.907 [2024-11-19 08:40:19.111646] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 1.130 ms, status: 0
00:21:39.907 [2024-11-19 08:40:19.113025] mngt/ftl_mngt_md.c: ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:21:39.907 [2024-11-19 08:40:19.131560] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 18.536 ms, status: 0
00:21:39.907 [2024-11-19 08:40:19.132021] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.053 ms, status: 0
00:21:39.907 [2024-11-19 08:40:19.137393] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 5.220 ms, status: 0
00:21:39.907 [2024-11-19 08:40:19.138107] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.161 ms, status: 0
00:21:39.907 [2024-11-19 08:40:19.138233] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.019 ms, status: 0
00:21:39.907 [2024-11-19 08:40:19.138359] mngt/ftl_mngt_ioch.c: io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:21:39.907 [2024-11-19 08:40:19.142866] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 4.514 ms, status: 0
00:21:39.907 [2024-11-19 08:40:19.143115] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.015 ms, status: 0
00:21:39.907 [2024-11-19 08:40:19.143214] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:21:39.907 [2024-11-19 08:40:19.143250] upgrade/ftl_sb_v5.c: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes
00:21:39.907 [2024-11-19 08:40:19.143549] upgrade/ftl_sb_v5.c: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x190 bytes
00:21:39.908 [2024-11-19 08:40:19.143640] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:21:39.908 [2024-11-19 08:40:19.143665] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:21:39.908 [2024-11-19 08:40:19.143679] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:21:39.908 [2024-11-19 08:40:19.143697] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:21:39.908 [2024-11-19 08:40:19.143710] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:21:39.908 [2024-11-19 08:40:19.143732] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:21:39.908 [2024-11-19 08:40:19.143747] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.545 ms, status: 0
00:21:39.908 [2024-11-19 08:40:19.143931] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.080 ms, status: 0
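Two of the numbers just printed pin down the l2p region that appears in the layout dump below: 23592960 L2P entries at 4 bytes each is exactly 90 MiB, and if one FTL block is 4 KiB (an assumption, but consistent with every size in this dump) those entries address 90 GiB of user space:

#!/usr/bin/env bash
# Derive the l2p region size and the mapped capacity from the figures above.
entries=23592960   # "L2P entries"
addr_size=4        # "L2P address size" in bytes
block_size=4096    # assumed FTL block size of 4 KiB
echo "L2P table:  $(( entries * addr_size / 1024 / 1024 )) MiB"          # 90 MiB
echo "User space: $(( entries * block_size / 1024 / 1024 / 1024 )) GiB"  # 90 GiB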
00:21:39.908 [2024-11-19 08:40:19.144115] ftl_layout.c: ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:21:39.908 [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:21:39.908 [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 90.00 MiB
00:21:39.908 [FTL][ftl0] Region band_md: offset 90.12 MiB, blocks 0.50 MiB
00:21:39.908 [FTL][ftl0] Region band_md_mirror: offset 90.62 MiB, blocks 0.50 MiB
00:21:39.908 [FTL][ftl0] Region nvc_md: offset 123.88 MiB, blocks 0.12 MiB
00:21:39.908 [FTL][ftl0] Region nvc_md_mirror: offset 124.00 MiB, blocks 0.12 MiB
00:21:39.908 [FTL][ftl0] Region p2l0: offset 91.12 MiB, blocks 8.00 MiB
00:21:39.908 [FTL][ftl0] Region p2l1: offset 99.12 MiB, blocks 8.00 MiB
00:21:39.908 [FTL][ftl0] Region p2l2: offset 107.12 MiB, blocks 8.00 MiB
00:21:39.908 [FTL][ftl0] Region p2l3: offset 115.12 MiB, blocks 8.00 MiB
00:21:39.908 [FTL][ftl0] Region trim_md: offset 123.12 MiB, blocks 0.25 MiB
00:21:39.908 [FTL][ftl0] Region trim_md_mirror: offset 123.38 MiB, blocks 0.25 MiB
00:21:39.908 [FTL][ftl0] Region trim_log: offset 123.62 MiB, blocks 0.12 MiB
00:21:39.908 [FTL][ftl0] Region trim_log_mirror: offset 123.75 MiB, blocks 0.12 MiB
00:21:39.908 [2024-11-19 08:40:19.144777] ftl_layout.c: ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:21:39.908 [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:21:39.908 [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
00:21:39.908 [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:21:39.908 [2024-11-19 08:40:19.144903] upgrade/ftl_sb_v5.c: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:21:39.908 [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:21:39.908 [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:21:39.908 [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:21:39.908 [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:21:39.908 [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:21:39.908 [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:21:39.908 [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:21:39.908 [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:21:39.908 [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:21:39.908 [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:21:39.908 [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:21:39.908 [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:21:39.908 [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:21:39.908 [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:21:39.908 [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:21:39.909 [2024-11-19 08:40:19.145127] upgrade/ftl_sb_v5.c: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:21:39.909 [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:21:39.909 [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:21:39.909 [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:21:39.909 [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:21:39.909 [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
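The hex superblock entries above describe the same regions as the MiB dump before them, so they cross-check neatly if one block is taken as 4 KiB: blk_sz 0x5a00 on type 0x2 works out to 90 MiB (the l2p region) and blk_sz 0x1900000 on type 0x9 to 102400 MiB (the data_btm region). The type-to-region pairing is inferred from the matching sizes; the log itself does not name the types:

#!/usr/bin/env bash
# Convert blk_sz values from the superblock layout to MiB, assuming 4 KiB blocks.
to_mib() { echo "$(( $1 * 4096 / 1048576 )) MiB"; }
to_mib 0x5a00      # type 0x2 -> 90 MiB, matches the l2p region
to_mib 0x1900000   # type 0x9 -> 102400 MiB, matches the data_btm region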
00:21:39.909 [2024-11-19 08:40:19.145230] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 1.179 ms, status: 0
00:21:39.909 [2024-11-19 08:40:19.179468] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 34.059 ms, status: 0
00:21:39.909 [2024-11-19 08:40:19.180409] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.082 ms, status: 0
00:21:40.167 [2024-11-19 08:40:19.225439] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 44.584 ms, status: 0
00:21:40.167 [2024-11-19 08:40:19.226190] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.004 ms, status: 0
00:21:40.168 [2024-11-19 08:40:19.226790] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.301 ms, status: 0
00:21:40.168 [2024-11-19 08:40:19.227055] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.158 ms, status: 0
00:21:40.168 [2024-11-19 08:40:19.247046] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 19.906 ms, status: 0
00:21:40.168 [2024-11-19 08:40:19.264818] ftl_nv_cache.c: ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:21:40.168 [2024-11-19 08:40:19.264861] ftl_nv_cache.c: ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:21:40.168 [2024-11-19 08:40:19.264904] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 17.589 ms, status: 0
00:21:40.168 [2024-11-19 08:40:19.296483] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 31.400 ms, status: 0
00:21:40.168 [2024-11-19 08:40:19.312763] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 15.898 ms, status: 0
00:21:40.168 [2024-11-19 08:40:19.328987] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 16.012 ms, status: 0
00:21:40.168 [2024-11-19 08:40:19.330027] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.800 ms, status: 0
00:21:40.168 [2024-11-19 08:40:19.411742] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 81.600 ms, status: 0
00:21:40.168 [2024-11-19 08:40:19.424942] ftl_l2p_cache.c: ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:21:40.168 [2024-11-19 08:40:19.438939] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 26.737 ms, status: 0
00:21:40.168 [2024-11-19 08:40:19.439302] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.009 ms, status: 0
00:21:40.168 [2024-11-19 08:40:19.439447] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 0.053 ms, status: 0
00:21:40.168 [2024-11-19 08:40:19.439595] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.006 ms, status: 0
00:21:40.168 [2024-11-19 08:40:19.439750] mngt/ftl_mngt_self_test.c: ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:21:40.168 [2024-11-19 08:40:19.439780] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.020 ms, status: 0
00:21:40.520 [2024-11-19 08:40:19.472400] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 32.486 ms, status: 0
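'Set FTL dirty state' is the mirror image of the 'Set FTL clean state' step in the shutdown trace earlier: the superblock stays marked dirty while the device is open, and only an orderly shutdown writes the clean flag back, which is what lets the next startup skip recovery. Once the finalize step just below reports 'FTL startup ... result 0', the bdev is live and can be inspected over the same RPC socket, for example:

#!/usr/bin/env bash
# Confirm the FTL bdev came up by querying it over the job's default RPC socket.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0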
00:21:40.521 [2024-11-19 08:40:19.472838] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.052 ms, status: 0
00:21:40.521 [2024-11-19 08:40:19.474039] mngt/ftl_mngt_ioch.c: io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:40.521 [2024-11-19 08:40:19.478198] mngt/ftl_mngt.c: finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 368.396 ms, result 0
00:21:40.521 [2024-11-19 08:40:19.479410] mngt/ftl_mngt_ioch.c: io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:40.521 Some configs were skipped because the RPC state that can call them passed over.
00:21:40.521 08:40:19 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:21:40.797 [2024-11-19 08:40:19.822798] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Process trim, duration: 1.588 ms, status: 0
00:21:40.797 [2024-11-19 08:40:19.823573] mngt/ftl_mngt.c: finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.356 ms, result 0
00:21:40.797 true
00:21:40.797 08:40:20 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:21:41.056 [2024-11-19 08:40:20.090672] mngt/ftl_mngt.c: *NOTICE*: [FTL][ftl0] Action: Process trim, duration: 1.050 ms, status: 0
00:21:41.056 [2024-11-19 08:40:20.091148] mngt/ftl_mngt.c: finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.523 ms, result 0
00:21:41.056 true
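The two bdev_ftl_unmap calls above trim 1024 blocks at each end of the LBA space: 23591936 is 23592960 (the L2P entry count, i.e. the device's user-visible block count) minus 1024, so the second call covers the very last stripe. The same pair of calls, parameterized:

#!/usr/bin/env bash
# Trim the first and last 1024-block ranges of ftl0, as trim.sh does above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
total_blocks=23592960   # user-visible block count (= "L2P entries" earlier)
nblocks=1024
"$RPC" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks "$nblocks"
"$RPC" bdev_ftl_unmap -b ftl0 --lba $(( total_blocks - nblocks )) --num_blocks "$nblocks"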
00:21:41.993 [2024-11-19 08:40:21.057142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:41.993 [2024-11-19 08:40:21.057432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:21:41.993 [2024-11-19 08:40:21.057462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:21:41.993 [2024-11-19 08:40:21.057478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:41.993 [2024-11-19 08:40:21.057515] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:21:41.993 [2024-11-19 08:40:21.060743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:41.993 [2024-11-19 08:40:21.060778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:21:41.993 [2024-11-19 08:40:21.060813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.201 ms
00:21:41.993 [2024-11-19 08:40:21.060824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:41.993 [2024-11-19 08:40:21.061114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:41.993 [2024-11-19 08:40:21.061131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:21:41.993 [2024-11-19 08:40:21.061145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms
00:21:41.993 [2024-11-19 08:40:21.061155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:41.993 [2024-11-19 08:40:21.065073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:41.993 [2024-11-19 08:40:21.065114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:21:41.993 [2024-11-19 08:40:21.065153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.892 ms
00:21:41.993 [2024-11-19 08:40:21.065165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:41.993 [2024-11-19 08:40:21.072307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:41.993 [2024-11-19 08:40:21.072494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:21:41.993 [2024-11-19 08:40:21.072525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.093 ms
00:21:41.993 [2024-11-19 08:40:21.072539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:41.993 [2024-11-19 08:40:21.084130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:41.993 [2024-11-19 08:40:21.084167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:21:41.993 [2024-11-19 08:40:21.084204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.523 ms
00:21:41.993 [2024-11-19 08:40:21.084226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:41.993 [2024-11-19 08:40:21.092510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:41.993 [2024-11-19 08:40:21.092547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:21:41.993 [2024-11-19 08:40:21.092583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.236 ms
00:21:41.993 [2024-11-19 08:40:21.092594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:41.993 [2024-11-19 08:40:21.092764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:41.993 [2024-11-19 08:40:21.092783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:21:41.993 [2024-11-19 08:40:21.092798] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:21:41.993 [2024-11-19 08:40:21.092814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.993 [2024-11-19 08:40:21.105349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.993 [2024-11-19 08:40:21.105383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:41.993 [2024-11-19 08:40:21.105416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.509 ms 00:21:41.993 [2024-11-19 08:40:21.105427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.993 [2024-11-19 08:40:21.117464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.993 [2024-11-19 08:40:21.117500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:41.993 [2024-11-19 08:40:21.117542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.987 ms 00:21:41.993 [2024-11-19 08:40:21.117554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.993 [2024-11-19 08:40:21.130024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.993 [2024-11-19 08:40:21.130066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:41.993 [2024-11-19 08:40:21.130108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.417 ms 00:21:41.993 [2024-11-19 08:40:21.130121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.993 [2024-11-19 08:40:21.142206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.993 [2024-11-19 08:40:21.142275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:41.993 [2024-11-19 08:40:21.142323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.991 ms 00:21:41.993 [2024-11-19 08:40:21.142337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.993 [2024-11-19 08:40:21.142392] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:41.993 [2024-11-19 08:40:21.142446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 
08:40:21.142601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.142991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.143039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.143053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.143085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:41.993 [2024-11-19 08:40:21.143114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:21:41.994 [2024-11-19 08:40:21.143132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.143987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.144001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.144018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.144031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.144048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.144061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.144078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.144092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.144109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.144122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.144139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.144152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.144172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:41.994 [2024-11-19 08:40:21.144195] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:41.994 [2024-11-19 08:40:21.144224] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e7b4f20c-cbe0-42b7-a2fd-ee64b862d20d 00:21:41.994 [2024-11-19 08:40:21.144251] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:41.994 [2024-11-19 08:40:21.144278] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:41.994 [2024-11-19 08:40:21.144290] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:41.994 [2024-11-19 08:40:21.144308] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:41.994 [2024-11-19 08:40:21.144321] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:41.994 [2024-11-19 08:40:21.144352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:41.994 [2024-11-19 08:40:21.144365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:41.994 [2024-11-19 08:40:21.144381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:41.994 [2024-11-19 08:40:21.144392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:41.994 [2024-11-19 08:40:21.144410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:41.994 [2024-11-19 08:40:21.144423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:41.994 [2024-11-19 08:40:21.144441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.022 ms 00:21:41.994 [2024-11-19 08:40:21.144453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.994 [2024-11-19 08:40:21.161008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.994 [2024-11-19 08:40:21.161052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:41.994 [2024-11-19 08:40:21.161082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.493 ms 00:21:41.994 [2024-11-19 08:40:21.161097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.994 [2024-11-19 08:40:21.161590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.994 [2024-11-19 08:40:21.161633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:41.994 [2024-11-19 08:40:21.161657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:21:41.994 [2024-11-19 08:40:21.161676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.994 [2024-11-19 08:40:21.221942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.994 [2024-11-19 08:40:21.221989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:41.994 [2024-11-19 08:40:21.222029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.994 [2024-11-19 08:40:21.222042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.995 [2024-11-19 08:40:21.222156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.995 [2024-11-19 08:40:21.222173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:41.995 [2024-11-19 08:40:21.222190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.995 [2024-11-19 08:40:21.222207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.995 [2024-11-19 08:40:21.222307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.995 [2024-11-19 08:40:21.222326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:41.995 [2024-11-19 08:40:21.222349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.995 [2024-11-19 08:40:21.222361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.995 [2024-11-19 08:40:21.222394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.995 [2024-11-19 08:40:21.222408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:41.995 [2024-11-19 08:40:21.222428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.995 [2024-11-19 08:40:21.222441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.253 [2024-11-19 08:40:21.315584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.253 [2024-11-19 08:40:21.315660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:42.253 [2024-11-19 08:40:21.315691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.253 [2024-11-19 08:40:21.315705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.253 [2024-11-19 
08:40:21.398439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:42.253 [2024-11-19 08:40:21.398497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:21:42.253 [2024-11-19 08:40:21.398539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:42.253 [2024-11-19 08:40:21.398558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:42.253 [2024-11-19 08:40:21.398720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:42.253 [2024-11-19 08:40:21.398757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:21:42.253 [2024-11-19 08:40:21.398782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:42.253 [2024-11-19 08:40:21.398796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:42.253 [2024-11-19 08:40:21.398839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:42.253 [2024-11-19 08:40:21.398855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:21:42.253 [2024-11-19 08:40:21.398872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:42.253 [2024-11-19 08:40:21.398885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:42.253 [2024-11-19 08:40:21.399046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:42.254 [2024-11-19 08:40:21.399067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:21:42.254 [2024-11-19 08:40:21.399097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:42.254 [2024-11-19 08:40:21.399110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:42.254 [2024-11-19 08:40:21.399173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:42.254 [2024-11-19 08:40:21.399192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:21:42.254 [2024-11-19 08:40:21.399212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:42.254 [2024-11-19 08:40:21.399225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:42.254 [2024-11-19 08:40:21.399282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:42.254 [2024-11-19 08:40:21.399311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:42.254 [2024-11-19 08:40:21.399337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:42.254 [2024-11-19 08:40:21.399350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:42.254 [2024-11-19 08:40:21.399415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:42.254 [2024-11-19 08:40:21.399432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:42.254 [2024-11-19 08:40:21.399451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:42.254 [2024-11-19 08:40:21.399464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:42.254 [2024-11-19 08:40:21.399692] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.486 ms, result 0
00:21:43.189 08:40:22 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:21:43.189 08:40:22 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
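The spdk_dd invocation above reads the bdev back into a plain file: --ib names the input bdev, --of the output file, --count gives the number of blocks to copy, and --json points at the saved FTL bdev configuration. Given the 256 [MB] total in the copy progress below, 65536 blocks works out to a 4 KiB block size (65536 x 4 KiB = 256 MiB); at the reported average of 23 MBps the copy takes about 11 seconds, consistent with the 08:40:23 to 08:40:34 timestamps. A sketch of the same read-back, assuming the FTL config written earlier to ftl.json (the SPDK variable is this sketch's shorthand):

  # Dump 256 MiB (65536 x 4 KiB blocks) from the ftl0 bdev into a regular file.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
      --of="$SPDK/test/ftl/data" \
      --count=65536 \
      --json="$SPDK/test/ftl/config/ftl.json"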
[2024-11-19 08:40:22.383235] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
[2024-11-19 08:40:22.383683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76149 ]
00:21:43.448 [2024-11-19 08:40:22.564432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:43.448 [2024-11-19 08:40:22.658760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:43.706 [2024-11-19 08:40:22.970836] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:43.706 [2024-11-19 08:40:22.970920] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:21:43.966 [2024-11-19 08:40:23.133016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:43.966 [2024-11-19 08:40:23.133070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:21:43.966 [2024-11-19 08:40:23.133106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:21:43.966 [2024-11-19 08:40:23.133118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:43.966 [2024-11-19 08:40:23.136508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:43.967 [2024-11-19 08:40:23.136730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:43.967 [2024-11-19 08:40:23.136758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.360 ms
00:21:43.967 [2024-11-19 08:40:23.136771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:43.967 [2024-11-19 08:40:23.136973] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:21:43.967 [2024-11-19 08:40:23.137971] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:21:43.967 [2024-11-19 08:40:23.138006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:43.967 [2024-11-19 08:40:23.138021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:43.967 [2024-11-19 08:40:23.138033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms
00:21:43.967 [2024-11-19 08:40:23.138044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:43.967 [2024-11-19 08:40:23.139379] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:21:43.967 [2024-11-19 08:40:23.155759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:43.967 [2024-11-19 08:40:23.156021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:21:43.967 [2024-11-19 08:40:23.156050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.381 ms
00:21:43.967 [2024-11-19 08:40:23.156063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:43.967 [2024-11-19 08:40:23.156187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:43.967 [2024-11-19 08:40:23.156209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:21:43.967 [2024-11-19 08:40:23.156223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.024 ms 00:21:43.967 [2024-11-19 08:40:23.156233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.967 [2024-11-19 08:40:23.160717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.967 [2024-11-19 08:40:23.160755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:43.967 [2024-11-19 08:40:23.160785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.411 ms 00:21:43.967 [2024-11-19 08:40:23.160795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.967 [2024-11-19 08:40:23.160908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.967 [2024-11-19 08:40:23.160928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:43.967 [2024-11-19 08:40:23.160940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:43.967 [2024-11-19 08:40:23.160966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.967 [2024-11-19 08:40:23.161003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.967 [2024-11-19 08:40:23.161037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:43.967 [2024-11-19 08:40:23.161049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:43.967 [2024-11-19 08:40:23.161058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.967 [2024-11-19 08:40:23.161088] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:43.967 [2024-11-19 08:40:23.165300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.967 [2024-11-19 08:40:23.165365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:43.967 [2024-11-19 08:40:23.165395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.221 ms 00:21:43.967 [2024-11-19 08:40:23.165406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.967 [2024-11-19 08:40:23.165466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.967 [2024-11-19 08:40:23.165484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:43.967 [2024-11-19 08:40:23.165495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:43.967 [2024-11-19 08:40:23.165505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.967 [2024-11-19 08:40:23.165529] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:43.967 [2024-11-19 08:40:23.165557] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:43.967 [2024-11-19 08:40:23.165596] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:43.967 [2024-11-19 08:40:23.165614] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:43.967 [2024-11-19 08:40:23.165789] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:43.967 [2024-11-19 08:40:23.165807] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:43.967 [2024-11-19 08:40:23.165821] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:43.967 [2024-11-19 08:40:23.165834] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:43.967 [2024-11-19 08:40:23.165852] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:43.967 [2024-11-19 08:40:23.165864] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:43.967 [2024-11-19 08:40:23.165875] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:43.967 [2024-11-19 08:40:23.165885] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:43.967 [2024-11-19 08:40:23.165895] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:43.967 [2024-11-19 08:40:23.165907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.967 [2024-11-19 08:40:23.165918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:43.967 [2024-11-19 08:40:23.165929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:21:43.967 [2024-11-19 08:40:23.165940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.967 [2024-11-19 08:40:23.166059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.967 [2024-11-19 08:40:23.166075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:43.967 [2024-11-19 08:40:23.166092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:43.967 [2024-11-19 08:40:23.166103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.967 [2024-11-19 08:40:23.166243] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:43.967 [2024-11-19 08:40:23.166483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:43.967 [2024-11-19 08:40:23.166525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:43.967 [2024-11-19 08:40:23.166538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.967 [2024-11-19 08:40:23.166550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:43.967 [2024-11-19 08:40:23.166560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:43.967 [2024-11-19 08:40:23.166570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:43.967 [2024-11-19 08:40:23.166580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:43.967 [2024-11-19 08:40:23.166590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:43.967 [2024-11-19 08:40:23.166600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:43.967 [2024-11-19 08:40:23.166612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:43.967 [2024-11-19 08:40:23.166642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:43.967 [2024-11-19 08:40:23.166673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:43.967 [2024-11-19 08:40:23.166698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:43.967 [2024-11-19 08:40:23.166710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:43.967 [2024-11-19 08:40:23.166721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.967 [2024-11-19 08:40:23.166731] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:43.967 [2024-11-19 08:40:23.166756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:43.967 [2024-11-19 08:40:23.166765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.967 [2024-11-19 08:40:23.166775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:43.967 [2024-11-19 08:40:23.166785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:43.967 [2024-11-19 08:40:23.166795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.967 [2024-11-19 08:40:23.166804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:43.967 [2024-11-19 08:40:23.166814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:43.967 [2024-11-19 08:40:23.166824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.967 [2024-11-19 08:40:23.166834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:43.967 [2024-11-19 08:40:23.166844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:43.967 [2024-11-19 08:40:23.166854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.967 [2024-11-19 08:40:23.166863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:43.968 [2024-11-19 08:40:23.166873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:43.968 [2024-11-19 08:40:23.166883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.968 [2024-11-19 08:40:23.166893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:43.968 [2024-11-19 08:40:23.166903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:43.968 [2024-11-19 08:40:23.166912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:43.968 [2024-11-19 08:40:23.166922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:43.968 [2024-11-19 08:40:23.166932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:43.968 [2024-11-19 08:40:23.166942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:43.968 [2024-11-19 08:40:23.166968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:43.968 [2024-11-19 08:40:23.166980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:43.968 [2024-11-19 08:40:23.166990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.968 [2024-11-19 08:40:23.167000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:43.968 [2024-11-19 08:40:23.167011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:43.968 [2024-11-19 08:40:23.167022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.968 [2024-11-19 08:40:23.167047] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:43.968 [2024-11-19 08:40:23.167058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:43.968 [2024-11-19 08:40:23.167069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:43.968 [2024-11-19 08:40:23.167084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.968 [2024-11-19 08:40:23.167095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:43.968 
[2024-11-19 08:40:23.167105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:43.968 [2024-11-19 08:40:23.167115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:43.968 [2024-11-19 08:40:23.167125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:43.968 [2024-11-19 08:40:23.167135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:43.968 [2024-11-19 08:40:23.167145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:43.968 [2024-11-19 08:40:23.167157] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:43.968 [2024-11-19 08:40:23.167171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:43.968 [2024-11-19 08:40:23.167183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:43.968 [2024-11-19 08:40:23.167194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:43.968 [2024-11-19 08:40:23.167204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:43.968 [2024-11-19 08:40:23.167215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:43.968 [2024-11-19 08:40:23.167226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:43.968 [2024-11-19 08:40:23.167236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:43.968 [2024-11-19 08:40:23.167247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:43.968 [2024-11-19 08:40:23.167258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:43.968 [2024-11-19 08:40:23.167268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:43.968 [2024-11-19 08:40:23.167280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:43.968 [2024-11-19 08:40:23.167290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:43.968 [2024-11-19 08:40:23.167301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:43.968 [2024-11-19 08:40:23.167312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:43.968 [2024-11-19 08:40:23.167339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:43.968 [2024-11-19 08:40:23.167349] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:43.968 [2024-11-19 08:40:23.167361] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:43.968 [2024-11-19 08:40:23.167372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:43.968 [2024-11-19 08:40:23.167382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:43.968 [2024-11-19 08:40:23.167393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:43.968 [2024-11-19 08:40:23.167404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:43.968 [2024-11-19 08:40:23.167417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.968 [2024-11-19 08:40:23.167427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:43.968 [2024-11-19 08:40:23.167443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.246 ms 00:21:43.968 [2024-11-19 08:40:23.167463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.968 [2024-11-19 08:40:23.200115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.968 [2024-11-19 08:40:23.200360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:43.968 [2024-11-19 08:40:23.200487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.545 ms 00:21:43.968 [2024-11-19 08:40:23.200537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.968 [2024-11-19 08:40:23.200977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.968 [2024-11-19 08:40:23.201161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:43.968 [2024-11-19 08:40:23.201267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:21:43.968 [2024-11-19 08:40:23.201367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.968 [2024-11-19 08:40:23.252145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.968 [2024-11-19 08:40:23.252406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:43.968 [2024-11-19 08:40:23.252523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.688 ms 00:21:43.968 [2024-11-19 08:40:23.252580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.968 [2024-11-19 08:40:23.252896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.968 [2024-11-19 08:40:23.253049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:43.968 [2024-11-19 08:40:23.253151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:43.968 [2024-11-19 08:40:23.253251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.968 [2024-11-19 08:40:23.253661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.968 [2024-11-19 08:40:23.253781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:43.968 [2024-11-19 08:40:23.253883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:21:43.968 [2024-11-19 08:40:23.254051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.968 [2024-11-19 
08:40:23.254252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.968 [2024-11-19 08:40:23.254306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:43.968 [2024-11-19 08:40:23.254415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:21:43.968 [2024-11-19 08:40:23.254536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.227 [2024-11-19 08:40:23.271909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.227 [2024-11-19 08:40:23.272075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:44.227 [2024-11-19 08:40:23.272188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.285 ms 00:21:44.227 [2024-11-19 08:40:23.272241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.227 [2024-11-19 08:40:23.289073] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:44.227 [2024-11-19 08:40:23.289297] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:44.227 [2024-11-19 08:40:23.289444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.227 [2024-11-19 08:40:23.289491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:44.227 [2024-11-19 08:40:23.289599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.963 ms 00:21:44.227 [2024-11-19 08:40:23.289665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.227 [2024-11-19 08:40:23.320146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.227 [2024-11-19 08:40:23.320310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:44.227 [2024-11-19 08:40:23.320339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.352 ms 00:21:44.227 [2024-11-19 08:40:23.320352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.227 [2024-11-19 08:40:23.336912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.228 [2024-11-19 08:40:23.337116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:44.228 [2024-11-19 08:40:23.337143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.439 ms 00:21:44.228 [2024-11-19 08:40:23.337156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.228 [2024-11-19 08:40:23.353304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.228 [2024-11-19 08:40:23.353390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:44.228 [2024-11-19 08:40:23.353406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.050 ms 00:21:44.228 [2024-11-19 08:40:23.353416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.228 [2024-11-19 08:40:23.354298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.228 [2024-11-19 08:40:23.354380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:44.228 [2024-11-19 08:40:23.354395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:21:44.228 [2024-11-19 08:40:23.354406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.228 [2024-11-19 08:40:23.427122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:44.228 [2024-11-19 08:40:23.427190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:44.228 [2024-11-19 08:40:23.427210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.683 ms 00:21:44.228 [2024-11-19 08:40:23.427221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.228 [2024-11-19 08:40:23.439487] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:44.228 [2024-11-19 08:40:23.453324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.228 [2024-11-19 08:40:23.453396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:44.228 [2024-11-19 08:40:23.453418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.946 ms 00:21:44.228 [2024-11-19 08:40:23.453430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.228 [2024-11-19 08:40:23.453645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.228 [2024-11-19 08:40:23.453724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:44.228 [2024-11-19 08:40:23.453739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:44.228 [2024-11-19 08:40:23.453751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.228 [2024-11-19 08:40:23.453822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.228 [2024-11-19 08:40:23.453840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:44.228 [2024-11-19 08:40:23.453853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:44.228 [2024-11-19 08:40:23.453865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.228 [2024-11-19 08:40:23.453908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.228 [2024-11-19 08:40:23.453928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:44.228 [2024-11-19 08:40:23.453941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:44.228 [2024-11-19 08:40:23.453952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.228 [2024-11-19 08:40:23.453993] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:44.228 [2024-11-19 08:40:23.454010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.228 [2024-11-19 08:40:23.454021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:44.228 [2024-11-19 08:40:23.454032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:44.228 [2024-11-19 08:40:23.454043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.228 [2024-11-19 08:40:23.483798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.228 [2024-11-19 08:40:23.484030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:44.228 [2024-11-19 08:40:23.484058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.724 ms 00:21:44.228 [2024-11-19 08:40:23.484070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.228 [2024-11-19 08:40:23.484209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.228 [2024-11-19 08:40:23.484229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:21:44.228 [2024-11-19 08:40:23.484241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:44.228 [2024-11-19 08:40:23.484252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.228 [2024-11-19 08:40:23.485263] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:44.228 [2024-11-19 08:40:23.489316] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 351.919 ms, result 0 00:21:44.228 [2024-11-19 08:40:23.490221] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:44.228 [2024-11-19 08:40:23.506021] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:45.603  [2024-11-19T08:40:25.834Z] Copying: 26/256 [MB] (26 MBps) [2024-11-19T08:40:26.771Z] Copying: 49/256 [MB] (23 MBps) [2024-11-19T08:40:27.712Z] Copying: 72/256 [MB] (22 MBps) [2024-11-19T08:40:28.647Z] Copying: 94/256 [MB] (22 MBps) [2024-11-19T08:40:29.582Z] Copying: 117/256 [MB] (22 MBps) [2024-11-19T08:40:30.519Z] Copying: 139/256 [MB] (22 MBps) [2024-11-19T08:40:31.895Z] Copying: 162/256 [MB] (23 MBps) [2024-11-19T08:40:32.828Z] Copying: 185/256 [MB] (23 MBps) [2024-11-19T08:40:33.764Z] Copying: 208/256 [MB] (22 MBps) [2024-11-19T08:40:34.700Z] Copying: 231/256 [MB] (23 MBps) [2024-11-19T08:40:34.700Z] Copying: 255/256 [MB] (23 MBps) [2024-11-19T08:40:34.700Z] Copying: 256/256 [MB] (average 23 MBps)[2024-11-19 08:40:34.526472] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:55.404 [2024-11-19 08:40:34.539545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.404 [2024-11-19 08:40:34.539594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:55.404 [2024-11-19 08:40:34.539635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:55.404 [2024-11-19 08:40:34.539665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.404 [2024-11-19 08:40:34.539701] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:55.404 [2024-11-19 08:40:34.543109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.404 [2024-11-19 08:40:34.543144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:55.404 [2024-11-19 08:40:34.543177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.385 ms 00:21:55.404 [2024-11-19 08:40:34.543189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.404 [2024-11-19 08:40:34.543488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.404 [2024-11-19 08:40:34.543520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:55.404 [2024-11-19 08:40:34.543534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:21:55.404 [2024-11-19 08:40:34.543545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.404 [2024-11-19 08:40:34.547492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.404 [2024-11-19 08:40:34.547559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:55.404 [2024-11-19 08:40:34.547576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.924 ms 00:21:55.404 [2024-11-19 08:40:34.547587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.404 [2024-11-19 08:40:34.555225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.404 [2024-11-19 08:40:34.555272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:55.404 [2024-11-19 08:40:34.555302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.595 ms 00:21:55.404 [2024-11-19 08:40:34.555313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.404 [2024-11-19 08:40:34.587011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.404 [2024-11-19 08:40:34.587056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:55.404 [2024-11-19 08:40:34.587089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.619 ms 00:21:55.404 [2024-11-19 08:40:34.587100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.404 [2024-11-19 08:40:34.605700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.404 [2024-11-19 08:40:34.605762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:55.404 [2024-11-19 08:40:34.605780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.533 ms 00:21:55.404 [2024-11-19 08:40:34.605806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.404 [2024-11-19 08:40:34.605974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.404 [2024-11-19 08:40:34.605996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:55.404 [2024-11-19 08:40:34.606010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:21:55.404 [2024-11-19 08:40:34.606022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.404 [2024-11-19 08:40:34.638863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.404 [2024-11-19 08:40:34.638925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:55.404 [2024-11-19 08:40:34.638975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.791 ms 00:21:55.404 [2024-11-19 08:40:34.638987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.404 [2024-11-19 08:40:34.669800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.404 [2024-11-19 08:40:34.669842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:55.404 [2024-11-19 08:40:34.669873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.744 ms 00:21:55.404 [2024-11-19 08:40:34.669884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.665 [2024-11-19 08:40:34.700930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.665 [2024-11-19 08:40:34.700976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:55.665 [2024-11-19 08:40:34.700993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.933 ms 00:21:55.665 [2024-11-19 08:40:34.701005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.665 [2024-11-19 08:40:34.731102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.665 [2024-11-19 08:40:34.731143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:55.665 [2024-11-19 
08:40:34.731174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.987 ms 00:21:55.665 [2024-11-19 08:40:34.731184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.665 [2024-11-19 08:40:34.731280] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:55.665 [2024-11-19 08:40:34.731308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731924] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.731992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.732003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.732014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.732026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.732037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.732048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.732060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:55.665 [2024-11-19 08:40:34.732072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732211] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 
08:40:34.732525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:55.666 [2024-11-19 08:40:34.732568] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:55.666 [2024-11-19 08:40:34.732580] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e7b4f20c-cbe0-42b7-a2fd-ee64b862d20d 00:21:55.666 [2024-11-19 08:40:34.732592] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:55.666 [2024-11-19 08:40:34.732602] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:55.666 [2024-11-19 08:40:34.732630] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:55.666 [2024-11-19 08:40:34.732643] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:55.666 [2024-11-19 08:40:34.732653] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:55.666 [2024-11-19 08:40:34.732664] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:55.666 [2024-11-19 08:40:34.732675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:55.666 [2024-11-19 08:40:34.732685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:55.666 [2024-11-19 08:40:34.732695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:55.666 [2024-11-19 08:40:34.732706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.666 [2024-11-19 08:40:34.732731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:55.666 [2024-11-19 08:40:34.732744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.428 ms 00:21:55.666 [2024-11-19 08:40:34.732755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.666 [2024-11-19 08:40:34.748782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.666 [2024-11-19 08:40:34.748823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:55.666 [2024-11-19 08:40:34.748841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.998 ms 00:21:55.666 [2024-11-19 08:40:34.748852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.666 [2024-11-19 08:40:34.749332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.666 [2024-11-19 08:40:34.749358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:55.666 [2024-11-19 08:40:34.749372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:21:55.666 [2024-11-19 08:40:34.749384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.666 [2024-11-19 08:40:34.794480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.666 [2024-11-19 08:40:34.794531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:55.666 [2024-11-19 08:40:34.794579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.666 [2024-11-19 08:40:34.794591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.666 [2024-11-19 08:40:34.794754] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:21:55.666 [2024-11-19 08:40:34.794775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:55.666 [2024-11-19 08:40:34.794788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.666 [2024-11-19 08:40:34.794799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.666 [2024-11-19 08:40:34.794875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.666 [2024-11-19 08:40:34.794894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:55.666 [2024-11-19 08:40:34.794906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.666 [2024-11-19 08:40:34.794916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.666 [2024-11-19 08:40:34.794956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.666 [2024-11-19 08:40:34.795003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:55.666 [2024-11-19 08:40:34.795016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.666 [2024-11-19 08:40:34.795028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.666 [2024-11-19 08:40:34.894302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.666 [2024-11-19 08:40:34.894373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:55.666 [2024-11-19 08:40:34.894393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.666 [2024-11-19 08:40:34.894405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.925 [2024-11-19 08:40:34.982820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.925 [2024-11-19 08:40:34.982896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:55.925 [2024-11-19 08:40:34.982946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.925 [2024-11-19 08:40:34.982975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.925 [2024-11-19 08:40:34.983089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.925 [2024-11-19 08:40:34.983109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:55.925 [2024-11-19 08:40:34.983122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.925 [2024-11-19 08:40:34.983133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.925 [2024-11-19 08:40:34.983169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.925 [2024-11-19 08:40:34.983183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:55.925 [2024-11-19 08:40:34.983211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.925 [2024-11-19 08:40:34.983223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.925 [2024-11-19 08:40:34.983355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.925 [2024-11-19 08:40:34.983377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:55.925 [2024-11-19 08:40:34.983389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.925 [2024-11-19 08:40:34.983401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:21:55.925 [2024-11-19 08:40:34.983454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.925 [2024-11-19 08:40:34.983473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:55.925 [2024-11-19 08:40:34.983486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.925 [2024-11-19 08:40:34.983524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.925 [2024-11-19 08:40:34.983576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.925 [2024-11-19 08:40:34.983592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:55.925 [2024-11-19 08:40:34.983632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.925 [2024-11-19 08:40:34.983651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.925 [2024-11-19 08:40:34.983714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:55.925 [2024-11-19 08:40:34.983732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:55.925 [2024-11-19 08:40:34.983761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:55.925 [2024-11-19 08:40:34.983773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.925 [2024-11-19 08:40:34.983957] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 444.407 ms, result 0 00:21:56.862 00:21:56.863 00:21:56.863 08:40:35 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:21:56.863 08:40:35 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:57.430 08:40:36 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:57.430 [2024-11-19 08:40:36.606980] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
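
The three ftl.ftl_trim commands above are the verification half of the trim test: cmp compares the first 4194304 bytes of the exported data file against /dev/zero (the trimmed range should read back as zeroes), md5sum fingerprints the file, and spdk_dd then rewrites 1024 blocks of the random pattern through the ftl0 bdev described by ftl.json. A minimal Python sketch of the same zero-check and fingerprint follows; it is illustrative only, not part of the SPDK test scripts, and the paths are simply the ones shown in the log:

import hashlib

DATA = "/home/vagrant/spdk_repo/spdk/test/ftl/data"  # path from the cmp/md5sum calls above
NBYTES = 4194304                                     # --bytes=4194304 in the cmp call

# cmp --bytes=4194304 DATA /dev/zero: the leading 4 MiB must read back as zeroes.
with open(DATA, "rb") as f:
    assert f.read(NBYTES) == bytes(NBYTES), "trimmed range is not all zeroes"

# md5sum DATA: fingerprint the whole file for later comparison.
with open(DATA, "rb") as f:
    digest = hashlib.md5(f.read()).hexdigest()
print(f"{digest}  {DATA}")  # same "hash  filename" shape that md5sum prints
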
00:21:57.430 [2024-11-19 08:40:36.607387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76299 ] 00:21:57.689 [2024-11-19 08:40:36.784871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.689 [2024-11-19 08:40:36.884944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.947 [2024-11-19 08:40:37.204444] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:57.947 [2024-11-19 08:40:37.204743] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:58.208 [2024-11-19 08:40:37.366686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.208 [2024-11-19 08:40:37.366965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:58.208 [2024-11-19 08:40:37.366997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:58.208 [2024-11-19 08:40:37.367011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.208 [2024-11-19 08:40:37.370298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.208 [2024-11-19 08:40:37.370467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:58.208 [2024-11-19 08:40:37.370496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.249 ms 00:21:58.208 [2024-11-19 08:40:37.370508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.208 [2024-11-19 08:40:37.370778] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:58.208 [2024-11-19 08:40:37.371744] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:58.208 [2024-11-19 08:40:37.371779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.208 [2024-11-19 08:40:37.371793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:58.208 [2024-11-19 08:40:37.371806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:21:58.208 [2024-11-19 08:40:37.371818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.208 [2024-11-19 08:40:37.373099] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:58.208 [2024-11-19 08:40:37.389462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.208 [2024-11-19 08:40:37.389684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:58.208 [2024-11-19 08:40:37.389715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.365 ms 00:21:58.208 [2024-11-19 08:40:37.389729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.208 [2024-11-19 08:40:37.389859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.208 [2024-11-19 08:40:37.389882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:58.208 [2024-11-19 08:40:37.389896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:58.208 [2024-11-19 08:40:37.389907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.208 [2024-11-19 08:40:37.394673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:58.208 [2024-11-19 08:40:37.394744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:58.208 [2024-11-19 08:40:37.394777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.707 ms 00:21:58.208 [2024-11-19 08:40:37.394788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.208 [2024-11-19 08:40:37.394912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.208 [2024-11-19 08:40:37.394934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:58.208 [2024-11-19 08:40:37.394962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:58.208 [2024-11-19 08:40:37.394980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.208 [2024-11-19 08:40:37.395018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.208 [2024-11-19 08:40:37.395037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:58.208 [2024-11-19 08:40:37.395065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:58.208 [2024-11-19 08:40:37.395091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.208 [2024-11-19 08:40:37.395126] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:58.208 [2024-11-19 08:40:37.399532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.208 [2024-11-19 08:40:37.399572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:58.208 [2024-11-19 08:40:37.399589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.415 ms 00:21:58.208 [2024-11-19 08:40:37.399601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.208 [2024-11-19 08:40:37.399702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.208 [2024-11-19 08:40:37.399723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:58.208 [2024-11-19 08:40:37.399736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:58.208 [2024-11-19 08:40:37.399747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.208 [2024-11-19 08:40:37.399781] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:58.208 [2024-11-19 08:40:37.399817] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:58.208 [2024-11-19 08:40:37.399860] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:58.208 [2024-11-19 08:40:37.399881] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:58.208 [2024-11-19 08:40:37.399994] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:58.208 [2024-11-19 08:40:37.400009] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:58.208 [2024-11-19 08:40:37.400024] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:58.208 [2024-11-19 08:40:37.400049] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:58.208 [2024-11-19 08:40:37.400068] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:58.208 [2024-11-19 08:40:37.400080] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:58.208 [2024-11-19 08:40:37.400091] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:58.208 [2024-11-19 08:40:37.400101] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:58.208 [2024-11-19 08:40:37.400112] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:58.208 [2024-11-19 08:40:37.400124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.208 [2024-11-19 08:40:37.400136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:58.208 [2024-11-19 08:40:37.400148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:21:58.208 [2024-11-19 08:40:37.400159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.208 [2024-11-19 08:40:37.400281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.208 [2024-11-19 08:40:37.400299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:58.208 [2024-11-19 08:40:37.400316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:58.209 [2024-11-19 08:40:37.400328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.209 [2024-11-19 08:40:37.400445] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:58.209 [2024-11-19 08:40:37.400462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:58.209 [2024-11-19 08:40:37.400474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:58.209 [2024-11-19 08:40:37.400487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.209 [2024-11-19 08:40:37.400498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:58.209 [2024-11-19 08:40:37.400509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:58.209 [2024-11-19 08:40:37.400521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:58.209 [2024-11-19 08:40:37.400532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:58.209 [2024-11-19 08:40:37.400542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:58.209 [2024-11-19 08:40:37.400553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:58.209 [2024-11-19 08:40:37.400563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:58.209 [2024-11-19 08:40:37.400574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:58.209 [2024-11-19 08:40:37.400584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:58.209 [2024-11-19 08:40:37.400633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:58.209 [2024-11-19 08:40:37.400647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:58.209 [2024-11-19 08:40:37.400657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.209 [2024-11-19 08:40:37.400667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:58.209 [2024-11-19 08:40:37.400678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:58.209 [2024-11-19 08:40:37.400688] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.209 [2024-11-19 08:40:37.400698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:58.209 [2024-11-19 08:40:37.400709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:58.209 [2024-11-19 08:40:37.400719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.209 [2024-11-19 08:40:37.400730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:58.209 [2024-11-19 08:40:37.400741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:58.209 [2024-11-19 08:40:37.400751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.209 [2024-11-19 08:40:37.400762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:58.209 [2024-11-19 08:40:37.400772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:58.209 [2024-11-19 08:40:37.400782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.209 [2024-11-19 08:40:37.400792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:58.209 [2024-11-19 08:40:37.400803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:58.209 [2024-11-19 08:40:37.400813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.209 [2024-11-19 08:40:37.400823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:58.209 [2024-11-19 08:40:37.400834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:58.209 [2024-11-19 08:40:37.400843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:58.209 [2024-11-19 08:40:37.400854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:58.209 [2024-11-19 08:40:37.400865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:58.209 [2024-11-19 08:40:37.400875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:58.209 [2024-11-19 08:40:37.400885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:58.209 [2024-11-19 08:40:37.400897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:58.209 [2024-11-19 08:40:37.400907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.209 [2024-11-19 08:40:37.400918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:58.209 [2024-11-19 08:40:37.400928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:58.209 [2024-11-19 08:40:37.400938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.209 [2024-11-19 08:40:37.400948] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:58.209 [2024-11-19 08:40:37.400960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:58.209 [2024-11-19 08:40:37.400971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:58.209 [2024-11-19 08:40:37.400987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.209 [2024-11-19 08:40:37.400998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:58.209 [2024-11-19 08:40:37.401009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:58.209 [2024-11-19 08:40:37.401019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:58.209 
[2024-11-19 08:40:37.401030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:58.209 [2024-11-19 08:40:37.401040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:58.209 [2024-11-19 08:40:37.401051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:58.209 [2024-11-19 08:40:37.401063] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:58.209 [2024-11-19 08:40:37.401077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:58.209 [2024-11-19 08:40:37.401090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:58.209 [2024-11-19 08:40:37.401102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:58.209 [2024-11-19 08:40:37.401113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:58.209 [2024-11-19 08:40:37.401124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:58.209 [2024-11-19 08:40:37.401136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:58.209 [2024-11-19 08:40:37.401147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:58.209 [2024-11-19 08:40:37.401158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:58.209 [2024-11-19 08:40:37.401169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:58.209 [2024-11-19 08:40:37.401181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:58.209 [2024-11-19 08:40:37.401192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:58.209 [2024-11-19 08:40:37.401203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:58.209 [2024-11-19 08:40:37.401214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:58.209 [2024-11-19 08:40:37.401225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:58.209 [2024-11-19 08:40:37.401237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:58.209 [2024-11-19 08:40:37.401248] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:58.209 [2024-11-19 08:40:37.401265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:58.209 [2024-11-19 08:40:37.401278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:58.209 [2024-11-19 08:40:37.401290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:58.209 [2024-11-19 08:40:37.401301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:58.209 [2024-11-19 08:40:37.401312] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:58.209 [2024-11-19 08:40:37.401325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.209 [2024-11-19 08:40:37.401337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:58.209 [2024-11-19 08:40:37.401354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:21:58.209 [2024-11-19 08:40:37.401365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.209 [2024-11-19 08:40:37.434268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.209 [2024-11-19 08:40:37.434564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:58.209 [2024-11-19 08:40:37.434598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.829 ms 00:21:58.209 [2024-11-19 08:40:37.434633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.209 [2024-11-19 08:40:37.434826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.210 [2024-11-19 08:40:37.434855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:58.210 [2024-11-19 08:40:37.434869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:58.210 [2024-11-19 08:40:37.434880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.210 [2024-11-19 08:40:37.492806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.210 [2024-11-19 08:40:37.492882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:58.210 [2024-11-19 08:40:37.492903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.894 ms 00:21:58.210 [2024-11-19 08:40:37.492921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.210 [2024-11-19 08:40:37.493082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.210 [2024-11-19 08:40:37.493103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:58.210 [2024-11-19 08:40:37.493118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:58.210 [2024-11-19 08:40:37.493130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.210 [2024-11-19 08:40:37.493468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.210 [2024-11-19 08:40:37.493487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:58.210 [2024-11-19 08:40:37.493500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:21:58.210 [2024-11-19 08:40:37.493519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.210 [2024-11-19 08:40:37.493702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.210 [2024-11-19 08:40:37.493725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:58.210 [2024-11-19 08:40:37.493738] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:21:58.210 [2024-11-19 08:40:37.493750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.469 [2024-11-19 08:40:37.511664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.469 [2024-11-19 08:40:37.511716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:58.469 [2024-11-19 08:40:37.511735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.881 ms 00:21:58.469 [2024-11-19 08:40:37.511747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.469 [2024-11-19 08:40:37.529456] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:58.469 [2024-11-19 08:40:37.529503] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:58.469 [2024-11-19 08:40:37.529540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.469 [2024-11-19 08:40:37.529552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:58.469 [2024-11-19 08:40:37.529564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.634 ms 00:21:58.469 [2024-11-19 08:40:37.529575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.469 [2024-11-19 08:40:37.560145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.469 [2024-11-19 08:40:37.560206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:58.469 [2024-11-19 08:40:37.560225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.413 ms 00:21:58.469 [2024-11-19 08:40:37.560238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.469 [2024-11-19 08:40:37.576268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.469 [2024-11-19 08:40:37.576313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:58.469 [2024-11-19 08:40:37.576331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.924 ms 00:21:58.469 [2024-11-19 08:40:37.576343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.469 [2024-11-19 08:40:37.592344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.469 [2024-11-19 08:40:37.592512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:58.469 [2024-11-19 08:40:37.592541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.903 ms 00:21:58.469 [2024-11-19 08:40:37.592554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.469 [2024-11-19 08:40:37.593411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.469 [2024-11-19 08:40:37.593444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:58.469 [2024-11-19 08:40:37.593460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:21:58.469 [2024-11-19 08:40:37.593472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.469 [2024-11-19 08:40:37.667571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.469 [2024-11-19 08:40:37.667648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:58.469 [2024-11-19 08:40:37.667681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 74.047 ms 00:21:58.469 [2024-11-19 08:40:37.667694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.469 [2024-11-19 08:40:37.681103] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:58.469 [2024-11-19 08:40:37.695197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.469 [2024-11-19 08:40:37.695265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:58.469 [2024-11-19 08:40:37.695301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.350 ms 00:21:58.469 [2024-11-19 08:40:37.695319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.469 [2024-11-19 08:40:37.695475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.469 [2024-11-19 08:40:37.695494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:58.469 [2024-11-19 08:40:37.695535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:58.469 [2024-11-19 08:40:37.695546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.469 [2024-11-19 08:40:37.695617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.469 [2024-11-19 08:40:37.695665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:58.469 [2024-11-19 08:40:37.695679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:58.469 [2024-11-19 08:40:37.695698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.469 [2024-11-19 08:40:37.695737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.469 [2024-11-19 08:40:37.695753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:58.469 [2024-11-19 08:40:37.695766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:58.469 [2024-11-19 08:40:37.695776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.469 [2024-11-19 08:40:37.695818] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:58.469 [2024-11-19 08:40:37.695834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.469 [2024-11-19 08:40:37.695862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:58.469 [2024-11-19 08:40:37.695873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:58.469 [2024-11-19 08:40:37.695883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.469 [2024-11-19 08:40:37.725621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.469 [2024-11-19 08:40:37.725673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:58.469 [2024-11-19 08:40:37.725724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.707 ms 00:21:58.469 [2024-11-19 08:40:37.725738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.469 [2024-11-19 08:40:37.725889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.469 [2024-11-19 08:40:37.725910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:58.469 [2024-11-19 08:40:37.725923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:58.469 [2024-11-19 08:40:37.725934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
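
Every management step in this sequence is logged by trace_step in mngt/ftl_mngt.c as an Action / name / duration / status quadruple, and finish_msg then reports the wall-clock total for the whole process (351.919 ms for the first 'FTL startup' above; the startup completing just below reports 359.886 ms). A small Python sketch that tallies the per-step durations from a saved copy of this console output; "ftl.log" is a placeholder filename for a one-entry-per-line copy of the log, and the sum is only a rough cross-check since steps may overlap or leave gaps:

import re

name_re = re.compile(r"\[FTL\]\[ftl0\] name: (.+)$")
dur_re = re.compile(r"\[FTL\]\[ftl0\] duration: ([0-9.]+) ms$")

steps = []
pending = None
with open("ftl.log") as log:  # hypothetical saved copy of this console log
    for line in log:
        line = line.rstrip()
        if m := name_re.search(line):
            pending = m.group(1)          # remember the step name...
        elif (m := dur_re.search(line)) and pending:
            steps.append((pending, float(m.group(1))))  # ...and pair it with its duration
            pending = None

for step, ms in sorted(steps, key=lambda s: -s[1]):
    print(f"{ms:9.3f} ms  {step}")
print(f"{sum(ms for _, ms in steps):9.3f} ms  summed across steps")

For the startup above, such a tally would show the dominant steps to be Restore P2L checkpoints (74.047 ms) and Initialize NV cache (57.894 ms), with most of the remaining steps under a millisecond.
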
00:21:58.469 [2024-11-19 08:40:37.726923] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:58.469 [2024-11-19 08:40:37.731040] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 359.886 ms, result 0 00:21:58.469 [2024-11-19 08:40:37.731967] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:58.469 [2024-11-19 08:40:37.747991] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:58.728  [2024-11-19T08:40:38.024Z] Copying: 4096/4096 [kB] (average 22 MBps)[2024-11-19 08:40:37.929347] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:58.728 [2024-11-19 08:40:37.940716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.728 [2024-11-19 08:40:37.940757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:58.728 [2024-11-19 08:40:37.940797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:58.728 [2024-11-19 08:40:37.940808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.728 [2024-11-19 08:40:37.940836] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:58.728 [2024-11-19 08:40:37.944171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.728 [2024-11-19 08:40:37.944363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:58.728 [2024-11-19 08:40:37.944390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.316 ms 00:21:58.728 [2024-11-19 08:40:37.944402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.728 [2024-11-19 08:40:37.946114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.728 [2024-11-19 08:40:37.946158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:58.728 [2024-11-19 08:40:37.946176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.676 ms 00:21:58.728 [2024-11-19 08:40:37.946187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.728 [2024-11-19 08:40:37.950713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.728 [2024-11-19 08:40:37.950872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:58.728 [2024-11-19 08:40:37.950899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.495 ms 00:21:58.728 [2024-11-19 08:40:37.950912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.728 [2024-11-19 08:40:37.958916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.728 [2024-11-19 08:40:37.958946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:58.728 [2024-11-19 08:40:37.958977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.971 ms 00:21:58.728 [2024-11-19 08:40:37.958987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.728 [2024-11-19 08:40:37.990212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.728 [2024-11-19 08:40:37.990257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:58.728 [2024-11-19 08:40:37.990275] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 31.173 ms 00:21:58.728 [2024-11-19 08:40:37.990287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.728 [2024-11-19 08:40:38.007991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.728 [2024-11-19 08:40:38.008196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:58.729 [2024-11-19 08:40:38.008235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.654 ms 00:21:58.729 [2024-11-19 08:40:38.008249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.729 [2024-11-19 08:40:38.008415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.729 [2024-11-19 08:40:38.008436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:58.729 [2024-11-19 08:40:38.008449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:21:58.729 [2024-11-19 08:40:38.008461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.988 [2024-11-19 08:40:38.039853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.988 [2024-11-19 08:40:38.039896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:58.988 [2024-11-19 08:40:38.039914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.352 ms 00:21:58.988 [2024-11-19 08:40:38.039925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.988 [2024-11-19 08:40:38.069620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.988 [2024-11-19 08:40:38.069663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:58.988 [2024-11-19 08:40:38.069680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.591 ms 00:21:58.988 [2024-11-19 08:40:38.069693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.988 [2024-11-19 08:40:38.101183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.988 [2024-11-19 08:40:38.101240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:58.988 [2024-11-19 08:40:38.101274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.443 ms 00:21:58.988 [2024-11-19 08:40:38.101285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.988 [2024-11-19 08:40:38.132410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.988 [2024-11-19 08:40:38.132639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:58.988 [2024-11-19 08:40:38.132668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.026 ms 00:21:58.988 [2024-11-19 08:40:38.132682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.988 [2024-11-19 08:40:38.132731] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:58.988 [2024-11-19 08:40:38.132752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:21:58.988 [2024-11-19 08:40:38.132802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.132990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.133001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.133012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.133023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:58.988 [2024-11-19 08:40:38.133034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133641] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:58.989 [2024-11-19 08:40:38.133988] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:58.989 [2024-11-19 08:40:38.134000] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e7b4f20c-cbe0-42b7-a2fd-ee64b862d20d 00:21:58.990 [2024-11-19 08:40:38.134012] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:58.990 [2024-11-19 08:40:38.134022] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:21:58.990 [2024-11-19 08:40:38.134033] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:58.990 [2024-11-19 08:40:38.134045] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:58.990 [2024-11-19 08:40:38.134055] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:58.990 [2024-11-19 08:40:38.134066] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:58.990 [2024-11-19 08:40:38.134082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:58.990 [2024-11-19 08:40:38.134092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:58.990 [2024-11-19 08:40:38.134102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:58.990 [2024-11-19 08:40:38.134113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.990 [2024-11-19 08:40:38.134125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:58.990 [2024-11-19 08:40:38.134137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.384 ms 00:21:58.990 [2024-11-19 08:40:38.134148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.990 [2024-11-19 08:40:38.150754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.990 [2024-11-19 08:40:38.150791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:58.990 [2024-11-19 08:40:38.150824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.579 ms 00:21:58.990 [2024-11-19 08:40:38.150835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.990 [2024-11-19 08:40:38.151315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.990 [2024-11-19 08:40:38.151338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:58.990 [2024-11-19 08:40:38.151352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:21:58.990 [2024-11-19 08:40:38.151363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.990 [2024-11-19 08:40:38.199582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.990 [2024-11-19 08:40:38.199661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:58.990 [2024-11-19 08:40:38.199689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.990 [2024-11-19 08:40:38.199707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.990 [2024-11-19 08:40:38.199825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.990 [2024-11-19 08:40:38.199843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:58.990 [2024-11-19 08:40:38.199856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.990 [2024-11-19 08:40:38.199880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.990 [2024-11-19 08:40:38.199947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.990 [2024-11-19 08:40:38.199966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:58.990 [2024-11-19 08:40:38.199978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.990 [2024-11-19 08:40:38.199989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.990 [2024-11-19 08:40:38.200021] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.990 [2024-11-19 08:40:38.200035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:58.990 [2024-11-19 08:40:38.200046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.990 [2024-11-19 08:40:38.200057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.249 [2024-11-19 08:40:38.302520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.249 [2024-11-19 08:40:38.302594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:59.249 [2024-11-19 08:40:38.302629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.249 [2024-11-19 08:40:38.302644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.249 [2024-11-19 08:40:38.388817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.249 [2024-11-19 08:40:38.388884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:59.249 [2024-11-19 08:40:38.388920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.249 [2024-11-19 08:40:38.388931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.249 [2024-11-19 08:40:38.389009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.249 [2024-11-19 08:40:38.389026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:59.249 [2024-11-19 08:40:38.389037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.249 [2024-11-19 08:40:38.389047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.249 [2024-11-19 08:40:38.389079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.249 [2024-11-19 08:40:38.389099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:59.249 [2024-11-19 08:40:38.389110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.249 [2024-11-19 08:40:38.389121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.249 [2024-11-19 08:40:38.389252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.249 [2024-11-19 08:40:38.389271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:59.249 [2024-11-19 08:40:38.389283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.249 [2024-11-19 08:40:38.389299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.249 [2024-11-19 08:40:38.389353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.249 [2024-11-19 08:40:38.389371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:59.249 [2024-11-19 08:40:38.389389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.249 [2024-11-19 08:40:38.389399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.249 [2024-11-19 08:40:38.389447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.249 [2024-11-19 08:40:38.389462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:59.249 [2024-11-19 08:40:38.389473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.249 [2024-11-19 08:40:38.389483] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:59.249 [2024-11-19 08:40:38.389534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.249 [2024-11-19 08:40:38.389555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:59.249 [2024-11-19 08:40:38.389581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.249 [2024-11-19 08:40:38.389592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.249 [2024-11-19 08:40:38.389812] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 449.091 ms, result 0 00:22:00.184 00:22:00.184 00:22:00.184 08:40:39 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:00.184 08:40:39 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76330 00:22:00.184 08:40:39 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76330 00:22:00.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.184 08:40:39 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76330 ']' 00:22:00.184 08:40:39 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.184 08:40:39 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.184 08:40:39 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.184 08:40:39 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.184 08:40:39 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:00.442 [2024-11-19 08:40:39.497074] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:22:00.443 [2024-11-19 08:40:39.497471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76330 ] 00:22:00.443 [2024-11-19 08:40:39.677550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.701 [2024-11-19 08:40:39.781090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.270 08:40:40 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.270 08:40:40 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:01.270 08:40:40 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:01.873 [2024-11-19 08:40:40.867348] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:01.873 [2024-11-19 08:40:40.867664] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:01.873 [2024-11-19 08:40:41.054299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.873 [2024-11-19 08:40:41.054373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:01.873 [2024-11-19 08:40:41.054427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:01.873 [2024-11-19 08:40:41.054442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.873 [2024-11-19 08:40:41.058563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.873 [2024-11-19 08:40:41.058638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:01.873 [2024-11-19 08:40:41.058678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.091 ms 00:22:01.873 [2024-11-19 08:40:41.058690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.873 [2024-11-19 08:40:41.058910] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:01.873 [2024-11-19 08:40:41.059919] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:01.873 [2024-11-19 08:40:41.060105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.873 [2024-11-19 08:40:41.060126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:01.873 [2024-11-19 08:40:41.060141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.206 ms 00:22:01.873 [2024-11-19 08:40:41.060153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.873 [2024-11-19 08:40:41.061433] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:01.873 [2024-11-19 08:40:41.076894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.873 [2024-11-19 08:40:41.076964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:01.873 [2024-11-19 08:40:41.076985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.467 ms 00:22:01.873 [2024-11-19 08:40:41.077004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.873 [2024-11-19 08:40:41.077125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.873 [2024-11-19 08:40:41.077156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:01.873 [2024-11-19 08:40:41.077171] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:01.873 [2024-11-19 08:40:41.077188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.873 [2024-11-19 08:40:41.082051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.874 [2024-11-19 08:40:41.082302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:01.874 [2024-11-19 08:40:41.082334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.790 ms 00:22:01.874 [2024-11-19 08:40:41.082355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.874 [2024-11-19 08:40:41.082515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.874 [2024-11-19 08:40:41.082543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:01.874 [2024-11-19 08:40:41.082557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:22:01.874 [2024-11-19 08:40:41.082570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.874 [2024-11-19 08:40:41.082673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.874 [2024-11-19 08:40:41.082695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:01.874 [2024-11-19 08:40:41.082715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:01.874 [2024-11-19 08:40:41.082749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.874 [2024-11-19 08:40:41.082788] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:01.874 [2024-11-19 08:40:41.087295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.874 [2024-11-19 08:40:41.087337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:01.874 [2024-11-19 08:40:41.087364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.509 ms 00:22:01.874 [2024-11-19 08:40:41.087378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.874 [2024-11-19 08:40:41.087483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.874 [2024-11-19 08:40:41.087516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:01.874 [2024-11-19 08:40:41.087540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:01.874 [2024-11-19 08:40:41.087560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.874 [2024-11-19 08:40:41.087602] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:01.874 [2024-11-19 08:40:41.087651] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:01.874 [2024-11-19 08:40:41.087716] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:01.874 [2024-11-19 08:40:41.087743] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:01.874 [2024-11-19 08:40:41.087875] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:01.874 [2024-11-19 08:40:41.087893] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:01.874 [2024-11-19 08:40:41.087934] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:01.874 [2024-11-19 08:40:41.087956] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:01.874 [2024-11-19 08:40:41.087976] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:01.874 [2024-11-19 08:40:41.087991] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:01.874 [2024-11-19 08:40:41.088008] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:01.874 [2024-11-19 08:40:41.088021] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:01.874 [2024-11-19 08:40:41.088042] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:01.874 [2024-11-19 08:40:41.088069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.874 [2024-11-19 08:40:41.088089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:01.874 [2024-11-19 08:40:41.088103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:22:01.874 [2024-11-19 08:40:41.088120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.874 [2024-11-19 08:40:41.088243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.874 [2024-11-19 08:40:41.088266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:01.874 [2024-11-19 08:40:41.088281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:22:01.874 [2024-11-19 08:40:41.088299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.874 [2024-11-19 08:40:41.088416] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:01.874 [2024-11-19 08:40:41.088442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:01.874 [2024-11-19 08:40:41.088458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:01.874 [2024-11-19 08:40:41.088476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.874 [2024-11-19 08:40:41.088490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:01.874 [2024-11-19 08:40:41.088508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:01.874 [2024-11-19 08:40:41.088521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:01.874 [2024-11-19 08:40:41.088547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:01.874 [2024-11-19 08:40:41.088560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:01.874 [2024-11-19 08:40:41.088578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:01.874 [2024-11-19 08:40:41.088590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:01.874 [2024-11-19 08:40:41.088652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:01.874 [2024-11-19 08:40:41.088668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:01.874 [2024-11-19 08:40:41.088687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:01.874 [2024-11-19 08:40:41.088702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:01.874 [2024-11-19 08:40:41.088719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.874 
[2024-11-19 08:40:41.088732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:01.874 [2024-11-19 08:40:41.088749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:01.874 [2024-11-19 08:40:41.088761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.874 [2024-11-19 08:40:41.088779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:01.874 [2024-11-19 08:40:41.088806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:01.874 [2024-11-19 08:40:41.088825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:01.874 [2024-11-19 08:40:41.088838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:01.874 [2024-11-19 08:40:41.088859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:01.874 [2024-11-19 08:40:41.088872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:01.874 [2024-11-19 08:40:41.088889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:01.874 [2024-11-19 08:40:41.088902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:01.874 [2024-11-19 08:40:41.088919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:01.874 [2024-11-19 08:40:41.088932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:01.874 [2024-11-19 08:40:41.088949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:01.874 [2024-11-19 08:40:41.088962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:01.874 [2024-11-19 08:40:41.088981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:01.874 [2024-11-19 08:40:41.088994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:01.874 [2024-11-19 08:40:41.089011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:01.874 [2024-11-19 08:40:41.089023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:01.874 [2024-11-19 08:40:41.089041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:01.874 [2024-11-19 08:40:41.089054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:01.874 [2024-11-19 08:40:41.089070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:01.874 [2024-11-19 08:40:41.089083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:01.874 [2024-11-19 08:40:41.089104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.874 [2024-11-19 08:40:41.089117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:01.874 [2024-11-19 08:40:41.089134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:01.874 [2024-11-19 08:40:41.089146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.874 [2024-11-19 08:40:41.089162] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:01.874 [2024-11-19 08:40:41.089176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:01.874 [2024-11-19 08:40:41.089202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:01.874 [2024-11-19 08:40:41.089215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.874 [2024-11-19 08:40:41.089234] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:01.874 [2024-11-19 08:40:41.089247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:01.874 [2024-11-19 08:40:41.089264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:01.874 [2024-11-19 08:40:41.089277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:01.874 [2024-11-19 08:40:41.089293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:01.874 [2024-11-19 08:40:41.089306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:01.874 [2024-11-19 08:40:41.089325] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:01.874 [2024-11-19 08:40:41.089340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:01.874 [2024-11-19 08:40:41.089365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:01.874 [2024-11-19 08:40:41.089380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:01.874 [2024-11-19 08:40:41.089397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:01.874 [2024-11-19 08:40:41.089410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:01.874 [2024-11-19 08:40:41.089429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:01.874 [2024-11-19 08:40:41.089443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:01.875 [2024-11-19 08:40:41.089461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:01.875 [2024-11-19 08:40:41.089474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:01.875 [2024-11-19 08:40:41.089492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:01.875 [2024-11-19 08:40:41.089505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:01.875 [2024-11-19 08:40:41.089522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:01.875 [2024-11-19 08:40:41.089536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:01.875 [2024-11-19 08:40:41.089553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:01.875 [2024-11-19 08:40:41.089567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:01.875 [2024-11-19 08:40:41.089585] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:01.875 [2024-11-19 
08:40:41.089599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:01.875 [2024-11-19 08:40:41.089657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:01.875 [2024-11-19 08:40:41.089674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:01.875 [2024-11-19 08:40:41.089693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:01.875 [2024-11-19 08:40:41.089707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:01.875 [2024-11-19 08:40:41.089726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.875 [2024-11-19 08:40:41.089745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:01.875 [2024-11-19 08:40:41.089765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.375 ms 00:22:01.875 [2024-11-19 08:40:41.089779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.875 [2024-11-19 08:40:41.125320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.875 [2024-11-19 08:40:41.125377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:01.875 [2024-11-19 08:40:41.125419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.444 ms 00:22:01.875 [2024-11-19 08:40:41.125431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.875 [2024-11-19 08:40:41.125659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.875 [2024-11-19 08:40:41.125699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:01.875 [2024-11-19 08:40:41.125716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:22:01.875 [2024-11-19 08:40:41.125728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.134 [2024-11-19 08:40:41.169893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.134 [2024-11-19 08:40:41.169948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:02.134 [2024-11-19 08:40:41.169988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.122 ms 00:22:02.134 [2024-11-19 08:40:41.170002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.134 [2024-11-19 08:40:41.170168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.134 [2024-11-19 08:40:41.170190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:02.134 [2024-11-19 08:40:41.170210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:02.134 [2024-11-19 08:40:41.170224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.134 [2024-11-19 08:40:41.170602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.134 [2024-11-19 08:40:41.170648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:02.134 [2024-11-19 08:40:41.170679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:22:02.134 [2024-11-19 08:40:41.170693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:02.134 [2024-11-19 08:40:41.170863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.134 [2024-11-19 08:40:41.170883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:02.134 [2024-11-19 08:40:41.170903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:22:02.134 [2024-11-19 08:40:41.170916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.134 [2024-11-19 08:40:41.190362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.134 [2024-11-19 08:40:41.190410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:02.134 [2024-11-19 08:40:41.190453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.407 ms 00:22:02.134 [2024-11-19 08:40:41.190467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.134 [2024-11-19 08:40:41.206946] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:02.134 [2024-11-19 08:40:41.207008] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:02.134 [2024-11-19 08:40:41.207054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.134 [2024-11-19 08:40:41.207070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:02.134 [2024-11-19 08:40:41.207090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.417 ms 00:22:02.134 [2024-11-19 08:40:41.207104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.134 [2024-11-19 08:40:41.236719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.134 [2024-11-19 08:40:41.236904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:02.134 [2024-11-19 08:40:41.236947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.505 ms 00:22:02.134 [2024-11-19 08:40:41.236963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.134 [2024-11-19 08:40:41.253129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.134 [2024-11-19 08:40:41.253172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:02.134 [2024-11-19 08:40:41.253218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.044 ms 00:22:02.134 [2024-11-19 08:40:41.253232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.134 [2024-11-19 08:40:41.268721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.134 [2024-11-19 08:40:41.268766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:02.134 [2024-11-19 08:40:41.268793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.381 ms 00:22:02.134 [2024-11-19 08:40:41.268807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.134 [2024-11-19 08:40:41.269684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.134 [2024-11-19 08:40:41.269714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:02.134 [2024-11-19 08:40:41.269736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:22:02.134 [2024-11-19 08:40:41.269750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.134 [2024-11-19 
08:40:41.353984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.134 [2024-11-19 08:40:41.354265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:02.134 [2024-11-19 08:40:41.354303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.193 ms 00:22:02.134 [2024-11-19 08:40:41.354318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.134 [2024-11-19 08:40:41.366100] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:02.134 [2024-11-19 08:40:41.379100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.134 [2024-11-19 08:40:41.379197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:02.134 [2024-11-19 08:40:41.379225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.646 ms 00:22:02.134 [2024-11-19 08:40:41.379244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.134 [2024-11-19 08:40:41.379375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.134 [2024-11-19 08:40:41.379403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:02.134 [2024-11-19 08:40:41.379418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:02.134 [2024-11-19 08:40:41.379435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.134 [2024-11-19 08:40:41.379498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.134 [2024-11-19 08:40:41.379557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:02.134 [2024-11-19 08:40:41.379572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:02.134 [2024-11-19 08:40:41.379590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.134 [2024-11-19 08:40:41.379661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.134 [2024-11-19 08:40:41.379689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:02.134 [2024-11-19 08:40:41.379705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:02.134 [2024-11-19 08:40:41.379727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.134 [2024-11-19 08:40:41.379779] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:02.134 [2024-11-19 08:40:41.379810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.134 [2024-11-19 08:40:41.379825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:02.134 [2024-11-19 08:40:41.379868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:02.134 [2024-11-19 08:40:41.379881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.134 [2024-11-19 08:40:41.408673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.135 [2024-11-19 08:40:41.408716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:02.135 [2024-11-19 08:40:41.408758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.727 ms 00:22:02.135 [2024-11-19 08:40:41.408772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.135 [2024-11-19 08:40:41.408931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.135 [2024-11-19 08:40:41.408952] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:02.135 [2024-11-19 08:40:41.408972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:22:02.135 [2024-11-19 08:40:41.408990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.135 [2024-11-19 08:40:41.410189] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:02.135 [2024-11-19 08:40:41.414264] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 355.322 ms, result 0 00:22:02.135 [2024-11-19 08:40:41.415301] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:02.394 Some configs were skipped because the RPC state that can call them passed over. 00:22:02.394 08:40:41 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:02.652 [2024-11-19 08:40:41.721279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.652 [2024-11-19 08:40:41.721506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:02.652 [2024-11-19 08:40:41.721687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.565 ms 00:22:02.652 [2024-11-19 08:40:41.721879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.652 [2024-11-19 08:40:41.721997] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.273 ms, result 0 00:22:02.652 true 00:22:02.652 08:40:41 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:02.911 [2024-11-19 08:40:42.013170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.911 [2024-11-19 08:40:42.013399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:02.911 [2024-11-19 08:40:42.013538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:22:02.911 [2024-11-19 08:40:42.013598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.911 [2024-11-19 08:40:42.013802] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.709 ms, result 0 00:22:02.911 true 00:22:02.911 08:40:42 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76330 00:22:02.911 08:40:42 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76330 ']' 00:22:02.911 08:40:42 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76330 00:22:02.911 08:40:42 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:02.911 08:40:42 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.911 08:40:42 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76330 00:22:02.911 killing process with pid 76330 00:22:02.911 08:40:42 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:02.911 08:40:42 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:02.911 08:40:42 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76330' 00:22:02.911 08:40:42 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76330 00:22:02.911 08:40:42 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76330 00:22:03.848 [2024-11-19 08:40:42.965185] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.848 [2024-11-19 08:40:42.965276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:03.848 [2024-11-19 08:40:42.965298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:03.848 [2024-11-19 08:40:42.965310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.848 [2024-11-19 08:40:42.965341] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:03.848 [2024-11-19 08:40:42.968439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.848 [2024-11-19 08:40:42.968683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:03.848 [2024-11-19 08:40:42.968719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.071 ms 00:22:03.848 [2024-11-19 08:40:42.968733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.848 [2024-11-19 08:40:42.969043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.848 [2024-11-19 08:40:42.969063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:03.848 [2024-11-19 08:40:42.969077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:22:03.848 [2024-11-19 08:40:42.969088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.848 [2024-11-19 08:40:42.973040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.848 [2024-11-19 08:40:42.973087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:03.848 [2024-11-19 08:40:42.973126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.923 ms 00:22:03.848 [2024-11-19 08:40:42.973153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.848 [2024-11-19 08:40:42.979972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.848 [2024-11-19 08:40:42.980169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:03.848 [2024-11-19 08:40:42.980202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.752 ms 00:22:03.848 [2024-11-19 08:40:42.980215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.848 [2024-11-19 08:40:42.991960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.848 [2024-11-19 08:40:42.992000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:03.848 [2024-11-19 08:40:42.992037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.672 ms 00:22:03.848 [2024-11-19 08:40:42.992057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.848 [2024-11-19 08:40:43.000212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.848 [2024-11-19 08:40:43.000252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:03.848 [2024-11-19 08:40:43.000290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.103 ms 00:22:03.848 [2024-11-19 08:40:43.000301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.848 [2024-11-19 08:40:43.000445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.848 [2024-11-19 08:40:43.000464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:03.848 [2024-11-19 08:40:43.000478] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:22:03.848 [2024-11-19 08:40:43.000489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.848 [2024-11-19 08:40:43.013319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.848 [2024-11-19 08:40:43.013358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:03.848 [2024-11-19 08:40:43.013393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.803 ms 00:22:03.848 [2024-11-19 08:40:43.013403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.848 [2024-11-19 08:40:43.025148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.848 [2024-11-19 08:40:43.025185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:03.848 [2024-11-19 08:40:43.025229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.690 ms 00:22:03.848 [2024-11-19 08:40:43.025241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.848 [2024-11-19 08:40:43.036791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.848 [2024-11-19 08:40:43.036831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:03.848 [2024-11-19 08:40:43.036873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.495 ms 00:22:03.848 [2024-11-19 08:40:43.036886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.848 [2024-11-19 08:40:43.048036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.848 [2024-11-19 08:40:43.048074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:03.848 [2024-11-19 08:40:43.048114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.063 ms 00:22:03.848 [2024-11-19 08:40:43.048126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.848 [2024-11-19 08:40:43.048178] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:03.848 [2024-11-19 08:40:43.048202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:03.848 [2024-11-19 08:40:43.048222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:03.848 [2024-11-19 08:40:43.048235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:03.848 [2024-11-19 08:40:43.048252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:03.848 [2024-11-19 08:40:43.048264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:03.848 [2024-11-19 08:40:43.048285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:03.848 [2024-11-19 08:40:43.048298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:03.848 [2024-11-19 08:40:43.048314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:03.848 [2024-11-19 08:40:43.048327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 
08:40:43.048356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:22:03.849 [2024-11-19 08:40:43.048786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.048986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:03.849 [2024-11-19 08:40:43.049780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:03.850 [2024-11-19 08:40:43.049794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:03.850 [2024-11-19 08:40:43.049813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:03.850 [2024-11-19 08:40:43.049834] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:03.850 [2024-11-19 08:40:43.049865] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e7b4f20c-cbe0-42b7-a2fd-ee64b862d20d 00:22:03.850 [2024-11-19 08:40:43.049891] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:03.850 [2024-11-19 08:40:43.049918] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:03.850 [2024-11-19 08:40:43.049930] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:03.850 [2024-11-19 08:40:43.049947] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:03.850 [2024-11-19 08:40:43.049959] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:03.850 [2024-11-19 08:40:43.049975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:03.850 [2024-11-19 08:40:43.049988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:03.850 [2024-11-19 08:40:43.050003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:03.850 [2024-11-19 08:40:43.050014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:03.850 [2024-11-19 08:40:43.050031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:03.850 [2024-11-19 08:40:43.050044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:03.850 [2024-11-19 08:40:43.050062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms 00:22:03.850 [2024-11-19 08:40:43.050074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.850 [2024-11-19 08:40:43.067045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.850 [2024-11-19 08:40:43.067086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:03.850 [2024-11-19 08:40:43.067132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.908 ms 00:22:03.850 [2024-11-19 08:40:43.067145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.850 [2024-11-19 08:40:43.067697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.850 [2024-11-19 08:40:43.067725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:03.850 [2024-11-19 08:40:43.067746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:22:03.850 [2024-11-19 08:40:43.067766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.850 [2024-11-19 08:40:43.129732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.850 [2024-11-19 08:40:43.129801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:03.850 [2024-11-19 08:40:43.129824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.850 [2024-11-19 08:40:43.129838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.850 [2024-11-19 08:40:43.129979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.850 [2024-11-19 08:40:43.129999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:03.850 [2024-11-19 08:40:43.130014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.850 [2024-11-19 08:40:43.130029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.850 [2024-11-19 08:40:43.130102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.850 [2024-11-19 08:40:43.130122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:03.850 [2024-11-19 08:40:43.130139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.850 [2024-11-19 08:40:43.130150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.850 [2024-11-19 08:40:43.130179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.850 [2024-11-19 08:40:43.130193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:03.850 [2024-11-19 08:40:43.130206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.850 [2024-11-19 08:40:43.130218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.110 [2024-11-19 08:40:43.233601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.110 [2024-11-19 08:40:43.233757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:04.110 [2024-11-19 08:40:43.233787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.110 [2024-11-19 08:40:43.233801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.110 [2024-11-19 
08:40:43.312849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.110 [2024-11-19 08:40:43.312914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:04.110 [2024-11-19 08:40:43.312966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.110 [2024-11-19 08:40:43.312997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.110 [2024-11-19 08:40:43.313111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.110 [2024-11-19 08:40:43.313130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:04.110 [2024-11-19 08:40:43.313154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.110 [2024-11-19 08:40:43.313167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.110 [2024-11-19 08:40:43.313209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.110 [2024-11-19 08:40:43.313225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:04.110 [2024-11-19 08:40:43.313241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.110 [2024-11-19 08:40:43.313254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.110 [2024-11-19 08:40:43.313387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.110 [2024-11-19 08:40:43.313407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:04.110 [2024-11-19 08:40:43.313426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.110 [2024-11-19 08:40:43.313438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.110 [2024-11-19 08:40:43.313498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.110 [2024-11-19 08:40:43.313517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:04.110 [2024-11-19 08:40:43.313535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.110 [2024-11-19 08:40:43.313548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.110 [2024-11-19 08:40:43.313602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.110 [2024-11-19 08:40:43.313668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:04.110 [2024-11-19 08:40:43.313693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.110 [2024-11-19 08:40:43.313706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.110 [2024-11-19 08:40:43.313771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.110 [2024-11-19 08:40:43.313789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:04.110 [2024-11-19 08:40:43.313807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.110 [2024-11-19 08:40:43.313820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.110 [2024-11-19 08:40:43.314053] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 348.814 ms, result 0 00:22:05.047 08:40:44 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:05.047 [2024-11-19 08:40:44.294055] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:05.047 [2024-11-19 08:40:44.294547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76393 ] 00:22:05.306 [2024-11-19 08:40:44.489394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.306 [2024-11-19 08:40:44.585251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.875 [2024-11-19 08:40:44.909271] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:05.875 [2024-11-19 08:40:44.909382] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:05.875 [2024-11-19 08:40:45.073111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.875 [2024-11-19 08:40:45.073169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:05.875 [2024-11-19 08:40:45.073206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:05.875 [2024-11-19 08:40:45.073218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.875 [2024-11-19 08:40:45.076702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.875 [2024-11-19 08:40:45.076745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:05.875 [2024-11-19 08:40:45.076779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.433 ms 00:22:05.875 [2024-11-19 08:40:45.076790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.875 [2024-11-19 08:40:45.076941] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:05.875 [2024-11-19 08:40:45.077912] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:05.875 [2024-11-19 08:40:45.077964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.875 [2024-11-19 08:40:45.077988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:05.875 [2024-11-19 08:40:45.078008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:22:05.875 [2024-11-19 08:40:45.078020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.875 [2024-11-19 08:40:45.079422] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:05.875 [2024-11-19 08:40:45.095284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.875 [2024-11-19 08:40:45.095351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:05.875 [2024-11-19 08:40:45.095386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.863 ms 00:22:05.875 [2024-11-19 08:40:45.095397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.875 [2024-11-19 08:40:45.095545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.875 [2024-11-19 08:40:45.095569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:05.875 [2024-11-19 08:40:45.095584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:05.875 [2024-11-19 
08:40:45.095595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.875 [2024-11-19 08:40:45.100202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.875 [2024-11-19 08:40:45.100248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:05.875 [2024-11-19 08:40:45.100281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.506 ms 00:22:05.875 [2024-11-19 08:40:45.100292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.875 [2024-11-19 08:40:45.100442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.875 [2024-11-19 08:40:45.100465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:05.875 [2024-11-19 08:40:45.100478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:22:05.875 [2024-11-19 08:40:45.100489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.875 [2024-11-19 08:40:45.100541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.875 [2024-11-19 08:40:45.100569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:05.875 [2024-11-19 08:40:45.100582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:05.875 [2024-11-19 08:40:45.100592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.875 [2024-11-19 08:40:45.100676] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:05.875 [2024-11-19 08:40:45.104972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.875 [2024-11-19 08:40:45.105208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:05.875 [2024-11-19 08:40:45.105238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.306 ms 00:22:05.875 [2024-11-19 08:40:45.105250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.875 [2024-11-19 08:40:45.105342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.875 [2024-11-19 08:40:45.105363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:05.875 [2024-11-19 08:40:45.105376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:05.875 [2024-11-19 08:40:45.105387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.875 [2024-11-19 08:40:45.105433] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:05.875 [2024-11-19 08:40:45.105492] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:05.875 [2024-11-19 08:40:45.105558] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:05.875 [2024-11-19 08:40:45.105578] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:05.875 [2024-11-19 08:40:45.105748] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:05.875 [2024-11-19 08:40:45.105773] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:05.875 [2024-11-19 08:40:45.105793] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:22:05.875 [2024-11-19 08:40:45.105808] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:05.875 [2024-11-19 08:40:45.105828] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:05.875 [2024-11-19 08:40:45.105840] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:05.875 [2024-11-19 08:40:45.105850] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:05.875 [2024-11-19 08:40:45.105860] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:05.875 [2024-11-19 08:40:45.105870] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:05.875 [2024-11-19 08:40:45.105882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.875 [2024-11-19 08:40:45.105894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:05.875 [2024-11-19 08:40:45.105905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:22:05.875 [2024-11-19 08:40:45.105916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.876 [2024-11-19 08:40:45.106069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.876 [2024-11-19 08:40:45.106088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:05.876 [2024-11-19 08:40:45.106106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:22:05.876 [2024-11-19 08:40:45.106116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.876 [2024-11-19 08:40:45.106227] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:05.876 [2024-11-19 08:40:45.106245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:05.876 [2024-11-19 08:40:45.106258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:05.876 [2024-11-19 08:40:45.106270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.876 [2024-11-19 08:40:45.106280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:05.876 [2024-11-19 08:40:45.106290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:05.876 [2024-11-19 08:40:45.106301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:05.876 [2024-11-19 08:40:45.106311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:05.876 [2024-11-19 08:40:45.106338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:05.876 [2024-11-19 08:40:45.106348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:05.876 [2024-11-19 08:40:45.106357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:05.876 [2024-11-19 08:40:45.106367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:05.876 [2024-11-19 08:40:45.106376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:05.876 [2024-11-19 08:40:45.106399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:05.876 [2024-11-19 08:40:45.106410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:05.876 [2024-11-19 08:40:45.106420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.876 [2024-11-19 08:40:45.106431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:22:05.876 [2024-11-19 08:40:45.106441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:05.876 [2024-11-19 08:40:45.106450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.876 [2024-11-19 08:40:45.106460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:05.876 [2024-11-19 08:40:45.106470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:05.876 [2024-11-19 08:40:45.106480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.876 [2024-11-19 08:40:45.106489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:05.876 [2024-11-19 08:40:45.106499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:05.876 [2024-11-19 08:40:45.106508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.876 [2024-11-19 08:40:45.106518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:05.876 [2024-11-19 08:40:45.106527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:05.876 [2024-11-19 08:40:45.106536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.876 [2024-11-19 08:40:45.106546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:05.876 [2024-11-19 08:40:45.106555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:05.876 [2024-11-19 08:40:45.106565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.876 [2024-11-19 08:40:45.106575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:05.876 [2024-11-19 08:40:45.106584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:05.876 [2024-11-19 08:40:45.106593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:05.876 [2024-11-19 08:40:45.106603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:05.876 [2024-11-19 08:40:45.106612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:05.876 [2024-11-19 08:40:45.106622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:05.876 [2024-11-19 08:40:45.106631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:05.876 [2024-11-19 08:40:45.106641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:05.876 [2024-11-19 08:40:45.106665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.876 [2024-11-19 08:40:45.106678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:05.876 [2024-11-19 08:40:45.106688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:05.876 [2024-11-19 08:40:45.106698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.876 [2024-11-19 08:40:45.106709] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:05.876 [2024-11-19 08:40:45.106719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:05.876 [2024-11-19 08:40:45.106730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:05.876 [2024-11-19 08:40:45.106744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.876 [2024-11-19 08:40:45.106755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:05.876 [2024-11-19 08:40:45.106766] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:05.876 [2024-11-19 08:40:45.106776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:05.876 [2024-11-19 08:40:45.106786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:05.876 [2024-11-19 08:40:45.106795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:05.876 [2024-11-19 08:40:45.106805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:05.876 [2024-11-19 08:40:45.106817] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:05.876 [2024-11-19 08:40:45.106830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:05.876 [2024-11-19 08:40:45.106842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:05.876 [2024-11-19 08:40:45.106852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:05.876 [2024-11-19 08:40:45.106863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:05.876 [2024-11-19 08:40:45.106873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:05.876 [2024-11-19 08:40:45.106883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:05.876 [2024-11-19 08:40:45.106894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:05.876 [2024-11-19 08:40:45.106904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:05.876 [2024-11-19 08:40:45.106914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:05.876 [2024-11-19 08:40:45.106925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:05.876 [2024-11-19 08:40:45.106935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:05.876 [2024-11-19 08:40:45.106967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:05.876 [2024-11-19 08:40:45.106986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:05.876 [2024-11-19 08:40:45.107006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:05.876 [2024-11-19 08:40:45.107026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:05.876 [2024-11-19 08:40:45.107044] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:05.876 [2024-11-19 08:40:45.107058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:05.876 [2024-11-19 08:40:45.107080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:05.876 [2024-11-19 08:40:45.107091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:05.876 [2024-11-19 08:40:45.107102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:05.876 [2024-11-19 08:40:45.107113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:05.876 [2024-11-19 08:40:45.107126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.876 [2024-11-19 08:40:45.107136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:05.876 [2024-11-19 08:40:45.107155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:22:05.876 [2024-11-19 08:40:45.107166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.876 [2024-11-19 08:40:45.139824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.876 [2024-11-19 08:40:45.139899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:05.876 [2024-11-19 08:40:45.139953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.586 ms 00:22:05.876 [2024-11-19 08:40:45.139975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.876 [2024-11-19 08:40:45.140160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.876 [2024-11-19 08:40:45.140188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:05.876 [2024-11-19 08:40:45.140201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:05.876 [2024-11-19 08:40:45.140212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.193902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.136 [2024-11-19 08:40:45.193975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:06.136 [2024-11-19 08:40:45.194011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.640 ms 00:22:06.136 [2024-11-19 08:40:45.194028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.194172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.136 [2024-11-19 08:40:45.194192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:06.136 [2024-11-19 08:40:45.194205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:06.136 [2024-11-19 08:40:45.194216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.194561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.136 [2024-11-19 08:40:45.194578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:06.136 [2024-11-19 08:40:45.194591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:22:06.136 [2024-11-19 08:40:45.194609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.194839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:06.136 [2024-11-19 08:40:45.194874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:06.136 [2024-11-19 08:40:45.194898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:22:06.136 [2024-11-19 08:40:45.194917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.211402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.136 [2024-11-19 08:40:45.211445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:06.136 [2024-11-19 08:40:45.211479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.431 ms 00:22:06.136 [2024-11-19 08:40:45.211491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.228441] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:06.136 [2024-11-19 08:40:45.228708] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:06.136 [2024-11-19 08:40:45.228734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.136 [2024-11-19 08:40:45.228747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:06.136 [2024-11-19 08:40:45.228760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.044 ms 00:22:06.136 [2024-11-19 08:40:45.228774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.260591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.136 [2024-11-19 08:40:45.260840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:06.136 [2024-11-19 08:40:45.260873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.710 ms 00:22:06.136 [2024-11-19 08:40:45.260886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.277385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.136 [2024-11-19 08:40:45.277431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:06.136 [2024-11-19 08:40:45.277448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.379 ms 00:22:06.136 [2024-11-19 08:40:45.277460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.292497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.136 [2024-11-19 08:40:45.292727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:06.136 [2024-11-19 08:40:45.292760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.942 ms 00:22:06.136 [2024-11-19 08:40:45.292773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.293628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.136 [2024-11-19 08:40:45.293678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:06.136 [2024-11-19 08:40:45.293696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:22:06.136 [2024-11-19 08:40:45.293708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.366025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.136 [2024-11-19 
08:40:45.366098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:06.136 [2024-11-19 08:40:45.366136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.281 ms 00:22:06.136 [2024-11-19 08:40:45.366148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.379405] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:06.136 [2024-11-19 08:40:45.392905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.136 [2024-11-19 08:40:45.393001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:06.136 [2024-11-19 08:40:45.393042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.607 ms 00:22:06.136 [2024-11-19 08:40:45.393055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.393238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.136 [2024-11-19 08:40:45.393266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:06.136 [2024-11-19 08:40:45.393281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:06.136 [2024-11-19 08:40:45.393292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.393394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.136 [2024-11-19 08:40:45.393416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:06.136 [2024-11-19 08:40:45.393429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:06.136 [2024-11-19 08:40:45.393440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.393485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.136 [2024-11-19 08:40:45.393504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:06.136 [2024-11-19 08:40:45.393516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:06.136 [2024-11-19 08:40:45.393526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.393587] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:06.136 [2024-11-19 08:40:45.393654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.136 [2024-11-19 08:40:45.393686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:06.136 [2024-11-19 08:40:45.393698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:06.136 [2024-11-19 08:40:45.393709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.423991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.136 [2024-11-19 08:40:45.424226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:06.136 [2024-11-19 08:40:45.424256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.235 ms 00:22:06.136 [2024-11-19 08:40:45.424270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.136 [2024-11-19 08:40:45.424466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.137 [2024-11-19 08:40:45.424495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:06.137 [2024-11-19 
08:40:45.424509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:22:06.137 [2024-11-19 08:40:45.424522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.137 [2024-11-19 08:40:45.425793] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:06.395 [2024-11-19 08:40:45.430325] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 352.301 ms, result 0 00:22:06.395 [2024-11-19 08:40:45.431179] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:06.395 [2024-11-19 08:40:45.447513] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:07.331  [2024-11-19T08:40:47.563Z] Copying: 27/256 [MB] (27 MBps) [2024-11-19T08:40:48.938Z] Copying: 50/256 [MB] (23 MBps) [2024-11-19T08:40:49.872Z] Copying: 75/256 [MB] (24 MBps) [2024-11-19T08:40:50.808Z] Copying: 100/256 [MB] (25 MBps) [2024-11-19T08:40:51.741Z] Copying: 124/256 [MB] (23 MBps) [2024-11-19T08:40:52.675Z] Copying: 149/256 [MB] (25 MBps) [2024-11-19T08:40:53.608Z] Copying: 174/256 [MB] (24 MBps) [2024-11-19T08:40:54.542Z] Copying: 197/256 [MB] (23 MBps) [2024-11-19T08:40:55.916Z] Copying: 223/256 [MB] (25 MBps) [2024-11-19T08:40:55.916Z] Copying: 247/256 [MB] (24 MBps) [2024-11-19T08:40:56.173Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-19 08:40:56.061974] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:16.877 [2024-11-19 08:40:56.076432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.877 [2024-11-19 08:40:56.076750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:16.877 [2024-11-19 08:40:56.076960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:16.877 [2024-11-19 08:40:56.077172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.877 [2024-11-19 08:40:56.077259] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:16.877 [2024-11-19 08:40:56.081670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.877 [2024-11-19 08:40:56.081717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:16.877 [2024-11-19 08:40:56.081735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.366 ms 00:22:16.877 [2024-11-19 08:40:56.081748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.877 [2024-11-19 08:40:56.082056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.877 [2024-11-19 08:40:56.082081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:16.877 [2024-11-19 08:40:56.082095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:22:16.877 [2024-11-19 08:40:56.082106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.877 [2024-11-19 08:40:56.086083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.877 [2024-11-19 08:40:56.086263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:16.877 [2024-11-19 08:40:56.086414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.952 ms 00:22:16.877 [2024-11-19 08:40:56.086469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:16.877 [2024-11-19 08:40:56.094575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.878 [2024-11-19 08:40:56.094759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:16.878 [2024-11-19 08:40:56.094791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.928 ms 00:22:16.878 [2024-11-19 08:40:56.094805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.878 [2024-11-19 08:40:56.126274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.878 [2024-11-19 08:40:56.126320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:16.878 [2024-11-19 08:40:56.126339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.405 ms 00:22:16.878 [2024-11-19 08:40:56.126352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.878 [2024-11-19 08:40:56.144065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.878 [2024-11-19 08:40:56.144117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:16.878 [2024-11-19 08:40:56.144135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.645 ms 00:22:16.878 [2024-11-19 08:40:56.144153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.878 [2024-11-19 08:40:56.144342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.878 [2024-11-19 08:40:56.144370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:16.878 [2024-11-19 08:40:56.144384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:22:16.878 [2024-11-19 08:40:56.144396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.137 [2024-11-19 08:40:56.177044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.137 [2024-11-19 08:40:56.177217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:17.137 [2024-11-19 08:40:56.177248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.607 ms 00:22:17.137 [2024-11-19 08:40:56.177260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.137 [2024-11-19 08:40:56.208374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.137 [2024-11-19 08:40:56.208535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:17.137 [2024-11-19 08:40:56.208564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.039 ms 00:22:17.137 [2024-11-19 08:40:56.208584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.137 [2024-11-19 08:40:56.239486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.137 [2024-11-19 08:40:56.239669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:17.137 [2024-11-19 08:40:56.239698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.813 ms 00:22:17.137 [2024-11-19 08:40:56.239711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.137 [2024-11-19 08:40:56.270603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.137 [2024-11-19 08:40:56.270784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:17.137 [2024-11-19 08:40:56.270813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.786 ms 00:22:17.137 
[2024-11-19 08:40:56.270826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.137 [2024-11-19 08:40:56.270929] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:17.137 [2024-11-19 08:40:56.270958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:17.137 [2024-11-19 08:40:56.270978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:17.137 [2024-11-19 08:40:56.270989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:17.137 [2024-11-19 08:40:56.271001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:17.137 [2024-11-19 08:40:56.271013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:17.137 [2024-11-19 08:40:56.271025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:17.137 [2024-11-19 08:40:56.271036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:17.137 [2024-11-19 08:40:56.271048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:17.137 [2024-11-19 08:40:56.271059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:17.137 [2024-11-19 08:40:56.271071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:17.137 [2024-11-19 08:40:56.271083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:17.137 [2024-11-19 08:40:56.271095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:17.137 [2024-11-19 08:40:56.271106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:17.137 [2024-11-19 08:40:56.271118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271234] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 
08:40:56.271543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:22:17.138 [2024-11-19 08:40:56.271862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:17.138 [2024-11-19 08:40:56.271965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.271976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.271998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.272009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.272020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.272032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.272043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.272054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.272066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.272077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.272088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.272100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.272113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.272138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.272150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.272162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.272173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.272185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:17.139 [2024-11-19 08:40:56.272206] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:17.139 [2024-11-19 08:40:56.272217] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e7b4f20c-cbe0-42b7-a2fd-ee64b862d20d 00:22:17.139 [2024-11-19 08:40:56.272229] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:17.139 [2024-11-19 08:40:56.272240] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:17.139 [2024-11-19 08:40:56.272251] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:17.139 [2024-11-19 08:40:56.272262] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:17.139 [2024-11-19 08:40:56.272273] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:17.139 [2024-11-19 08:40:56.272284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:17.139 [2024-11-19 08:40:56.272295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:17.139 [2024-11-19 08:40:56.272305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:17.139 [2024-11-19 08:40:56.272315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:17.139 [2024-11-19 08:40:56.272326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.139 [2024-11-19 08:40:56.272344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:17.139 [2024-11-19 08:40:56.272357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.399 ms 00:22:17.139 [2024-11-19 08:40:56.272368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.139 [2024-11-19 08:40:56.289116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.139 [2024-11-19 08:40:56.289272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:17.139 [2024-11-19 08:40:56.289300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.720 ms 00:22:17.139 [2024-11-19 08:40:56.289315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.139 [2024-11-19 08:40:56.289801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:17.139 [2024-11-19 08:40:56.289830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:17.139 [2024-11-19 08:40:56.289844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:22:17.139 [2024-11-19 08:40:56.289856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.139 [2024-11-19 08:40:56.336508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.139 [2024-11-19 08:40:56.336708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:17.139 [2024-11-19 08:40:56.336738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.139 [2024-11-19 08:40:56.336753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.139 [2024-11-19 08:40:56.336862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.139 [2024-11-19 08:40:56.336881] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:17.139 [2024-11-19 08:40:56.336893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.139 [2024-11-19 08:40:56.336904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.139 [2024-11-19 08:40:56.336970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.139 [2024-11-19 08:40:56.336990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:17.139 [2024-11-19 08:40:56.337003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.139 [2024-11-19 08:40:56.337014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.139 [2024-11-19 08:40:56.337040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.139 [2024-11-19 08:40:56.337061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:17.139 [2024-11-19 08:40:56.337073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.139 [2024-11-19 08:40:56.337084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.398 [2024-11-19 08:40:56.440127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.398 [2024-11-19 08:40:56.440376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:17.398 [2024-11-19 08:40:56.440407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.398 [2024-11-19 08:40:56.440420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.398 [2024-11-19 08:40:56.526249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.398 [2024-11-19 08:40:56.526322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:17.398 [2024-11-19 08:40:56.526343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.398 [2024-11-19 08:40:56.526355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.398 [2024-11-19 08:40:56.526446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.398 [2024-11-19 08:40:56.526464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:17.398 [2024-11-19 08:40:56.526477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.398 [2024-11-19 08:40:56.526489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.398 [2024-11-19 08:40:56.526524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.398 [2024-11-19 08:40:56.526538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:17.398 [2024-11-19 08:40:56.526557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.398 [2024-11-19 08:40:56.526568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.398 [2024-11-19 08:40:56.526731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.398 [2024-11-19 08:40:56.526754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:17.398 [2024-11-19 08:40:56.526767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.398 [2024-11-19 08:40:56.526778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.398 [2024-11-19 08:40:56.526833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:22:17.398 [2024-11-19 08:40:56.526851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:17.398 [2024-11-19 08:40:56.526869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.398 [2024-11-19 08:40:56.526887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.398 [2024-11-19 08:40:56.526936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.398 [2024-11-19 08:40:56.526951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:17.398 [2024-11-19 08:40:56.526963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.398 [2024-11-19 08:40:56.526975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.398 [2024-11-19 08:40:56.527029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:17.398 [2024-11-19 08:40:56.527046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:17.398 [2024-11-19 08:40:56.527065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:17.398 [2024-11-19 08:40:56.527076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:17.398 [2024-11-19 08:40:56.527245] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 450.845 ms, result 0 00:22:18.333 00:22:18.333 00:22:18.333 08:40:57 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:18.899 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:22:18.899 08:40:58 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:22:18.899 08:40:58 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:22:18.899 08:40:58 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:18.899 08:40:58 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:18.899 08:40:58 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:22:18.899 08:40:58 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:18.899 Process with pid 76330 is not found 00:22:18.899 08:40:58 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76330 00:22:18.899 08:40:58 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76330 ']' 00:22:18.899 08:40:58 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76330 00:22:18.899 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76330) - No such process 00:22:18.899 08:40:58 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76330 is not found' 00:22:18.899 ************************************ 00:22:18.899 END TEST ftl_trim 00:22:18.899 ************************************ 00:22:18.899 00:22:18.899 real 1m9.080s 00:22:18.899 user 1m35.784s 00:22:18.899 sys 0m7.088s 00:22:18.899 08:40:58 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:18.899 08:40:58 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:18.899 08:40:58 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:18.899 08:40:58 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:18.899 08:40:58 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:18.899 08:40:58 ftl -- common/autotest_common.sh@10 
-- # set +x 00:22:19.158 ************************************ 00:22:19.158 START TEST ftl_restore 00:22:19.158 ************************************ 00:22:19.158 08:40:58 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:19.158 * Looking for test storage... 00:22:19.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:19.158 08:40:58 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:19.158 08:40:58 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:22:19.158 08:40:58 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:19.158 08:40:58 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:19.158 08:40:58 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:22:19.158 08:40:58 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:19.158 08:40:58 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:19.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.158 --rc genhtml_branch_coverage=1 00:22:19.158 --rc genhtml_function_coverage=1 00:22:19.158 --rc genhtml_legend=1 00:22:19.158 --rc geninfo_all_blocks=1 00:22:19.158 --rc geninfo_unexecuted_blocks=1 00:22:19.158 00:22:19.158 ' 00:22:19.158 08:40:58 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:19.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.158 --rc genhtml_branch_coverage=1 00:22:19.158 --rc genhtml_function_coverage=1 00:22:19.158 --rc genhtml_legend=1 00:22:19.158 --rc geninfo_all_blocks=1 00:22:19.158 --rc geninfo_unexecuted_blocks=1 00:22:19.158 00:22:19.158 ' 00:22:19.158 08:40:58 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:19.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.158 --rc genhtml_branch_coverage=1 00:22:19.158 --rc genhtml_function_coverage=1 00:22:19.158 --rc genhtml_legend=1 00:22:19.159 --rc geninfo_all_blocks=1 00:22:19.159 --rc geninfo_unexecuted_blocks=1 00:22:19.159 00:22:19.159 ' 00:22:19.159 08:40:58 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:19.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.159 --rc genhtml_branch_coverage=1 00:22:19.159 --rc genhtml_function_coverage=1 00:22:19.159 --rc genhtml_legend=1 00:22:19.159 --rc geninfo_all_blocks=1 00:22:19.159 --rc geninfo_unexecuted_blocks=1 00:22:19.159 00:22:19.159 ' 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
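The scripts/common.sh trace above implements 'lt 1.15 2' by splitting both version strings on '.', '-' and ':' into arrays (ver1, ver2) and comparing them field by field, with missing fields treated as 0. A minimal standalone sketch of that comparison (the ver_lt name and the demo around it are illustrative, not SPDK code):

  ver_lt() {                             # ver_lt A B: succeed if version A < version B
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
      local a=${v1[i]:-0} b=${v2[i]:-0}  # pad the shorter version with zeros
      (( a < b )) && return 0
      (( a > b )) && return 1
    done
    return 1                             # equal versions are not "less than"
  }
  ver_lt 1.15 2 && echo 'lcov predates 2.x'

The comparison succeeding here is what selects the legacy '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' spelling of the LCOV options exported in the trace.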
00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.SwjUE2IcVo 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:19.159 
08:40:58 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76599 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:19.159 08:40:58 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76599 00:22:19.159 08:40:58 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 76599 ']' 00:22:19.159 08:40:58 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.159 08:40:58 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.159 08:40:58 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.159 08:40:58 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.159 08:40:58 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:19.417 [2024-11-19 08:40:58.528110] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:19.417 [2024-11-19 08:40:58.528419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76599 ] 00:22:19.417 [2024-11-19 08:40:58.707562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.676 [2024-11-19 08:40:58.842942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.610 08:40:59 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.610 08:40:59 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:22:20.610 08:40:59 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:20.610 08:40:59 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:22:20.610 08:40:59 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:20.610 08:40:59 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:22:20.610 08:40:59 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:22:20.610 08:40:59 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:20.868 08:40:59 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:20.868 08:40:59 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:22:20.869 08:40:59 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:20.869 08:40:59 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:20.869 08:40:59 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:20.869 08:40:59 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:20.869 08:40:59 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:20.869 08:40:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:21.127 08:41:00 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:21.127 { 00:22:21.127 "name": "nvme0n1", 00:22:21.127 "aliases": [ 00:22:21.127 "4e48da0c-c056-4344-9536-4e2292bb869f" 00:22:21.127 ], 00:22:21.127 "product_name": "NVMe disk", 00:22:21.127 "block_size": 4096, 00:22:21.127 "num_blocks": 1310720, 00:22:21.127 "uuid": 
"4e48da0c-c056-4344-9536-4e2292bb869f", 00:22:21.127 "numa_id": -1, 00:22:21.127 "assigned_rate_limits": { 00:22:21.127 "rw_ios_per_sec": 0, 00:22:21.127 "rw_mbytes_per_sec": 0, 00:22:21.127 "r_mbytes_per_sec": 0, 00:22:21.127 "w_mbytes_per_sec": 0 00:22:21.127 }, 00:22:21.127 "claimed": true, 00:22:21.127 "claim_type": "read_many_write_one", 00:22:21.127 "zoned": false, 00:22:21.127 "supported_io_types": { 00:22:21.127 "read": true, 00:22:21.127 "write": true, 00:22:21.127 "unmap": true, 00:22:21.127 "flush": true, 00:22:21.127 "reset": true, 00:22:21.127 "nvme_admin": true, 00:22:21.127 "nvme_io": true, 00:22:21.127 "nvme_io_md": false, 00:22:21.127 "write_zeroes": true, 00:22:21.127 "zcopy": false, 00:22:21.127 "get_zone_info": false, 00:22:21.127 "zone_management": false, 00:22:21.127 "zone_append": false, 00:22:21.127 "compare": true, 00:22:21.127 "compare_and_write": false, 00:22:21.127 "abort": true, 00:22:21.127 "seek_hole": false, 00:22:21.127 "seek_data": false, 00:22:21.127 "copy": true, 00:22:21.127 "nvme_iov_md": false 00:22:21.127 }, 00:22:21.127 "driver_specific": { 00:22:21.127 "nvme": [ 00:22:21.127 { 00:22:21.127 "pci_address": "0000:00:11.0", 00:22:21.127 "trid": { 00:22:21.127 "trtype": "PCIe", 00:22:21.128 "traddr": "0000:00:11.0" 00:22:21.128 }, 00:22:21.128 "ctrlr_data": { 00:22:21.128 "cntlid": 0, 00:22:21.128 "vendor_id": "0x1b36", 00:22:21.128 "model_number": "QEMU NVMe Ctrl", 00:22:21.128 "serial_number": "12341", 00:22:21.128 "firmware_revision": "8.0.0", 00:22:21.128 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:21.128 "oacs": { 00:22:21.128 "security": 0, 00:22:21.128 "format": 1, 00:22:21.128 "firmware": 0, 00:22:21.128 "ns_manage": 1 00:22:21.128 }, 00:22:21.128 "multi_ctrlr": false, 00:22:21.128 "ana_reporting": false 00:22:21.128 }, 00:22:21.128 "vs": { 00:22:21.128 "nvme_version": "1.4" 00:22:21.128 }, 00:22:21.128 "ns_data": { 00:22:21.128 "id": 1, 00:22:21.128 "can_share": false 00:22:21.128 } 00:22:21.128 } 00:22:21.128 ], 00:22:21.128 "mp_policy": "active_passive" 00:22:21.128 } 00:22:21.128 } 00:22:21.128 ]' 00:22:21.128 08:41:00 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:21.128 08:41:00 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:21.128 08:41:00 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:21.128 08:41:00 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:21.128 08:41:00 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:21.128 08:41:00 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:22:21.128 08:41:00 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:22:21.128 08:41:00 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:21.128 08:41:00 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:22:21.128 08:41:00 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:21.128 08:41:00 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:21.695 08:41:00 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=a9e5c0c0-16ef-432f-be64-f8bd6a8fe532 00:22:21.695 08:41:00 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:22:21.695 08:41:00 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a9e5c0c0-16ef-432f-be64-f8bd6a8fe532 00:22:21.954 08:41:01 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:22:22.213 08:41:01 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=80e04b89-5c13-4dee-a73c-10304ba60cd1 00:22:22.213 08:41:01 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 80e04b89-5c13-4dee-a73c-10304ba60cd1 00:22:22.472 08:41:01 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=7cedcaa6-40be-4380-adf4-a705357c84a5 00:22:22.472 08:41:01 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:22.472 08:41:01 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7cedcaa6-40be-4380-adf4-a705357c84a5 00:22:22.472 08:41:01 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:22:22.472 08:41:01 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:22.472 08:41:01 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=7cedcaa6-40be-4380-adf4-a705357c84a5 00:22:22.472 08:41:01 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:22:22.472 08:41:01 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 7cedcaa6-40be-4380-adf4-a705357c84a5 00:22:22.472 08:41:01 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7cedcaa6-40be-4380-adf4-a705357c84a5 00:22:22.472 08:41:01 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:22.472 08:41:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:22.472 08:41:01 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:22.472 08:41:01 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7cedcaa6-40be-4380-adf4-a705357c84a5 00:22:22.731 08:41:01 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:22.731 { 00:22:22.731 "name": "7cedcaa6-40be-4380-adf4-a705357c84a5", 00:22:22.731 "aliases": [ 00:22:22.731 "lvs/nvme0n1p0" 00:22:22.731 ], 00:22:22.731 "product_name": "Logical Volume", 00:22:22.731 "block_size": 4096, 00:22:22.731 "num_blocks": 26476544, 00:22:22.731 "uuid": "7cedcaa6-40be-4380-adf4-a705357c84a5", 00:22:22.731 "assigned_rate_limits": { 00:22:22.731 "rw_ios_per_sec": 0, 00:22:22.731 "rw_mbytes_per_sec": 0, 00:22:22.731 "r_mbytes_per_sec": 0, 00:22:22.731 "w_mbytes_per_sec": 0 00:22:22.731 }, 00:22:22.732 "claimed": false, 00:22:22.732 "zoned": false, 00:22:22.732 "supported_io_types": { 00:22:22.732 "read": true, 00:22:22.732 "write": true, 00:22:22.732 "unmap": true, 00:22:22.732 "flush": false, 00:22:22.732 "reset": true, 00:22:22.732 "nvme_admin": false, 00:22:22.732 "nvme_io": false, 00:22:22.732 "nvme_io_md": false, 00:22:22.732 "write_zeroes": true, 00:22:22.732 "zcopy": false, 00:22:22.732 "get_zone_info": false, 00:22:22.732 "zone_management": false, 00:22:22.732 "zone_append": false, 00:22:22.732 "compare": false, 00:22:22.732 "compare_and_write": false, 00:22:22.732 "abort": false, 00:22:22.732 "seek_hole": true, 00:22:22.732 "seek_data": true, 00:22:22.732 "copy": false, 00:22:22.732 "nvme_iov_md": false 00:22:22.732 }, 00:22:22.732 "driver_specific": { 00:22:22.732 "lvol": { 00:22:22.732 "lvol_store_uuid": "80e04b89-5c13-4dee-a73c-10304ba60cd1", 00:22:22.732 "base_bdev": "nvme0n1", 00:22:22.732 "thin_provision": true, 00:22:22.732 "num_allocated_clusters": 0, 00:22:22.732 "snapshot": false, 00:22:22.732 "clone": false, 00:22:22.732 "esnap_clone": false 00:22:22.732 } 00:22:22.732 } 00:22:22.732 } 00:22:22.732 ]' 00:22:22.732 08:41:01 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:22.732 08:41:01 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:22.732 08:41:01 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:22.732 08:41:01 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:22.732 08:41:01 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:22.732 08:41:01 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:22.732 08:41:01 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:22:22.732 08:41:01 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:22:22.732 08:41:01 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:23.340 08:41:02 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:23.340 08:41:02 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:23.340 08:41:02 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 7cedcaa6-40be-4380-adf4-a705357c84a5 00:22:23.340 08:41:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7cedcaa6-40be-4380-adf4-a705357c84a5 00:22:23.340 08:41:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:23.340 08:41:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:23.340 08:41:02 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:23.340 08:41:02 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7cedcaa6-40be-4380-adf4-a705357c84a5 00:22:23.340 08:41:02 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:23.340 { 00:22:23.340 "name": "7cedcaa6-40be-4380-adf4-a705357c84a5", 00:22:23.340 "aliases": [ 00:22:23.340 "lvs/nvme0n1p0" 00:22:23.340 ], 00:22:23.340 "product_name": "Logical Volume", 00:22:23.340 "block_size": 4096, 00:22:23.340 "num_blocks": 26476544, 00:22:23.340 "uuid": "7cedcaa6-40be-4380-adf4-a705357c84a5", 00:22:23.340 "assigned_rate_limits": { 00:22:23.340 "rw_ios_per_sec": 0, 00:22:23.340 "rw_mbytes_per_sec": 0, 00:22:23.340 "r_mbytes_per_sec": 0, 00:22:23.340 "w_mbytes_per_sec": 0 00:22:23.340 }, 00:22:23.340 "claimed": false, 00:22:23.340 "zoned": false, 00:22:23.340 "supported_io_types": { 00:22:23.340 "read": true, 00:22:23.340 "write": true, 00:22:23.340 "unmap": true, 00:22:23.340 "flush": false, 00:22:23.340 "reset": true, 00:22:23.340 "nvme_admin": false, 00:22:23.340 "nvme_io": false, 00:22:23.340 "nvme_io_md": false, 00:22:23.340 "write_zeroes": true, 00:22:23.340 "zcopy": false, 00:22:23.340 "get_zone_info": false, 00:22:23.340 "zone_management": false, 00:22:23.340 "zone_append": false, 00:22:23.340 "compare": false, 00:22:23.340 "compare_and_write": false, 00:22:23.340 "abort": false, 00:22:23.340 "seek_hole": true, 00:22:23.340 "seek_data": true, 00:22:23.340 "copy": false, 00:22:23.340 "nvme_iov_md": false 00:22:23.340 }, 00:22:23.340 "driver_specific": { 00:22:23.340 "lvol": { 00:22:23.340 "lvol_store_uuid": "80e04b89-5c13-4dee-a73c-10304ba60cd1", 00:22:23.340 "base_bdev": "nvme0n1", 00:22:23.340 "thin_provision": true, 00:22:23.340 "num_allocated_clusters": 0, 00:22:23.340 "snapshot": false, 00:22:23.340 "clone": false, 00:22:23.340 "esnap_clone": false 00:22:23.340 } 00:22:23.340 } 00:22:23.340 } 00:22:23.340 ]' 00:22:23.340 08:41:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
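The get_bdev_size helper traced repeatedly in this run reads block_size and num_blocks out of the bdev_get_bdevs JSON with jq and reports block_size * num_blocks in MiB: 4096 x 1310720 blocks = 5120 MiB for the raw nvme0n1 namespace earlier, and 4096 x 26476544 = 103424 MiB for the thin-provisioned lvol 7cedcaa6-40be-4380-adf4-a705357c84a5. A condensed sketch of the same pipeline (paths and names as in the log; error handling omitted):

  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7cedcaa6-40be-4380-adf4-a705357c84a5)
  bs=$(jq '.[] .block_size' <<< "$info")    # 4096
  nb=$(jq '.[] .num_blocks' <<< "$info")    # 26476544
  echo $(( bs * nb / 1024 / 1024 ))         # 103424 MiB

Further below the harness settles on cache_size=5171 MiB and splits the cache controller accordingly (bdev_split_create nvc0n1 -s 5171 1).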
00:22:23.604 08:41:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:23.604 08:41:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:23.604 08:41:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:23.604 08:41:02 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:23.604 08:41:02 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:23.604 08:41:02 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:22:23.604 08:41:02 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:23.863 08:41:03 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:23.863 08:41:03 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 7cedcaa6-40be-4380-adf4-a705357c84a5 00:22:23.863 08:41:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7cedcaa6-40be-4380-adf4-a705357c84a5 00:22:23.863 08:41:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:23.863 08:41:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:23.863 08:41:03 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:23.863 08:41:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7cedcaa6-40be-4380-adf4-a705357c84a5 00:22:24.121 08:41:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:24.121 { 00:22:24.121 "name": "7cedcaa6-40be-4380-adf4-a705357c84a5", 00:22:24.121 "aliases": [ 00:22:24.121 "lvs/nvme0n1p0" 00:22:24.122 ], 00:22:24.122 "product_name": "Logical Volume", 00:22:24.122 "block_size": 4096, 00:22:24.122 "num_blocks": 26476544, 00:22:24.122 "uuid": "7cedcaa6-40be-4380-adf4-a705357c84a5", 00:22:24.122 "assigned_rate_limits": { 00:22:24.122 "rw_ios_per_sec": 0, 00:22:24.122 "rw_mbytes_per_sec": 0, 00:22:24.122 "r_mbytes_per_sec": 0, 00:22:24.122 "w_mbytes_per_sec": 0 00:22:24.122 }, 00:22:24.122 "claimed": false, 00:22:24.122 "zoned": false, 00:22:24.122 "supported_io_types": { 00:22:24.122 "read": true, 00:22:24.122 "write": true, 00:22:24.122 "unmap": true, 00:22:24.122 "flush": false, 00:22:24.122 "reset": true, 00:22:24.122 "nvme_admin": false, 00:22:24.122 "nvme_io": false, 00:22:24.122 "nvme_io_md": false, 00:22:24.122 "write_zeroes": true, 00:22:24.122 "zcopy": false, 00:22:24.122 "get_zone_info": false, 00:22:24.122 "zone_management": false, 00:22:24.122 "zone_append": false, 00:22:24.122 "compare": false, 00:22:24.122 "compare_and_write": false, 00:22:24.122 "abort": false, 00:22:24.122 "seek_hole": true, 00:22:24.122 "seek_data": true, 00:22:24.122 "copy": false, 00:22:24.122 "nvme_iov_md": false 00:22:24.122 }, 00:22:24.122 "driver_specific": { 00:22:24.122 "lvol": { 00:22:24.122 "lvol_store_uuid": "80e04b89-5c13-4dee-a73c-10304ba60cd1", 00:22:24.122 "base_bdev": "nvme0n1", 00:22:24.122 "thin_provision": true, 00:22:24.122 "num_allocated_clusters": 0, 00:22:24.122 "snapshot": false, 00:22:24.122 "clone": false, 00:22:24.122 "esnap_clone": false 00:22:24.122 } 00:22:24.122 } 00:22:24.122 } 00:22:24.122 ]' 00:22:24.122 08:41:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:24.122 08:41:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:24.122 08:41:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:24.381 08:41:03 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:22:24.381 08:41:03 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:24.381 08:41:03 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:24.381 08:41:03 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:24.381 08:41:03 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7cedcaa6-40be-4380-adf4-a705357c84a5 --l2p_dram_limit 10' 00:22:24.381 08:41:03 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:24.381 08:41:03 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:24.381 08:41:03 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:24.381 08:41:03 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:22:24.381 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:22:24.381 08:41:03 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7cedcaa6-40be-4380-adf4-a705357c84a5 --l2p_dram_limit 10 -c nvc0n1p0 00:22:24.640 [2024-11-19 08:41:03.711970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.641 [2024-11-19 08:41:03.712190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:24.641 [2024-11-19 08:41:03.712234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:24.641 [2024-11-19 08:41:03.712256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.641 [2024-11-19 08:41:03.712366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.641 [2024-11-19 08:41:03.712387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:24.641 [2024-11-19 08:41:03.712403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:24.641 [2024-11-19 08:41:03.712417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.641 [2024-11-19 08:41:03.712469] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:24.641 [2024-11-19 08:41:03.713523] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:24.641 [2024-11-19 08:41:03.713576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.641 [2024-11-19 08:41:03.713593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:24.641 [2024-11-19 08:41:03.713622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.129 ms 00:22:24.641 [2024-11-19 08:41:03.713638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.641 [2024-11-19 08:41:03.713789] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 940fa22b-a2a5-4996-9f72-ddb7245a8d43 00:22:24.641 [2024-11-19 08:41:03.714873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.641 [2024-11-19 08:41:03.714921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:24.641 [2024-11-19 08:41:03.714939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:24.641 [2024-11-19 08:41:03.714955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.641 [2024-11-19 08:41:03.719677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.641 [2024-11-19 
08:41:03.719728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:24.641 [2024-11-19 08:41:03.719749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.660 ms 00:22:24.641 [2024-11-19 08:41:03.719764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.641 [2024-11-19 08:41:03.719915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.641 [2024-11-19 08:41:03.719939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:24.641 [2024-11-19 08:41:03.719953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:22:24.641 [2024-11-19 08:41:03.719972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.641 [2024-11-19 08:41:03.720059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.641 [2024-11-19 08:41:03.720083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:24.641 [2024-11-19 08:41:03.720097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:24.641 [2024-11-19 08:41:03.720114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.641 [2024-11-19 08:41:03.720146] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:24.641 [2024-11-19 08:41:03.724787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.641 [2024-11-19 08:41:03.724831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:24.641 [2024-11-19 08:41:03.724868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.647 ms 00:22:24.641 [2024-11-19 08:41:03.724881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.641 [2024-11-19 08:41:03.724928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.641 [2024-11-19 08:41:03.724944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:24.641 [2024-11-19 08:41:03.724975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:24.641 [2024-11-19 08:41:03.724986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.641 [2024-11-19 08:41:03.725037] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:24.641 [2024-11-19 08:41:03.725198] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:24.641 [2024-11-19 08:41:03.725222] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:24.641 [2024-11-19 08:41:03.725238] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:24.641 [2024-11-19 08:41:03.725256] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:24.641 [2024-11-19 08:41:03.725271] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:24.641 [2024-11-19 08:41:03.725286] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:24.641 [2024-11-19 08:41:03.725298] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:24.641 [2024-11-19 08:41:03.725315] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:24.641 [2024-11-19 08:41:03.725327] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:24.641 [2024-11-19 08:41:03.725342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.641 [2024-11-19 08:41:03.725369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:24.641 [2024-11-19 08:41:03.725407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:22:24.641 [2024-11-19 08:41:03.725432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.641 [2024-11-19 08:41:03.725527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.641 [2024-11-19 08:41:03.725543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:24.641 [2024-11-19 08:41:03.725557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:24.641 [2024-11-19 08:41:03.725569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.641 [2024-11-19 08:41:03.725707] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:24.641 [2024-11-19 08:41:03.725730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:24.641 [2024-11-19 08:41:03.725745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:24.641 [2024-11-19 08:41:03.725773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.641 [2024-11-19 08:41:03.725787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:24.641 [2024-11-19 08:41:03.725798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:24.641 [2024-11-19 08:41:03.725811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:24.641 [2024-11-19 08:41:03.725823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:24.641 [2024-11-19 08:41:03.725836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:24.641 [2024-11-19 08:41:03.725847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:24.641 [2024-11-19 08:41:03.725860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:24.641 [2024-11-19 08:41:03.725871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:24.641 [2024-11-19 08:41:03.725884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:24.641 [2024-11-19 08:41:03.725895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:24.641 [2024-11-19 08:41:03.725908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:24.641 [2024-11-19 08:41:03.725918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.641 [2024-11-19 08:41:03.725944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:24.641 [2024-11-19 08:41:03.725972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:24.641 [2024-11-19 08:41:03.725986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.641 [2024-11-19 08:41:03.725997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:24.641 [2024-11-19 08:41:03.726010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:24.641 [2024-11-19 08:41:03.726028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.641 [2024-11-19 08:41:03.726047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:24.641 
[2024-11-19 08:41:03.726059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:24.641 [2024-11-19 08:41:03.726074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.641 [2024-11-19 08:41:03.726092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:24.641 [2024-11-19 08:41:03.726118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:24.641 [2024-11-19 08:41:03.726140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.642 [2024-11-19 08:41:03.726158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:24.642 [2024-11-19 08:41:03.726170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:24.642 [2024-11-19 08:41:03.726185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.642 [2024-11-19 08:41:03.726204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:24.642 [2024-11-19 08:41:03.726228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:24.642 [2024-11-19 08:41:03.726240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:24.642 [2024-11-19 08:41:03.726254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:24.642 [2024-11-19 08:41:03.726268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:24.642 [2024-11-19 08:41:03.726292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:24.642 [2024-11-19 08:41:03.726314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:24.642 [2024-11-19 08:41:03.726332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:24.642 [2024-11-19 08:41:03.726344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.642 [2024-11-19 08:41:03.726357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:24.642 [2024-11-19 08:41:03.726370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:24.642 [2024-11-19 08:41:03.726389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.642 [2024-11-19 08:41:03.726402] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:24.642 [2024-11-19 08:41:03.726431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:24.642 [2024-11-19 08:41:03.726454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:24.642 [2024-11-19 08:41:03.726469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.642 [2024-11-19 08:41:03.726487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:24.642 [2024-11-19 08:41:03.726513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:24.642 [2024-11-19 08:41:03.726526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:24.642 [2024-11-19 08:41:03.726540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:24.642 [2024-11-19 08:41:03.726551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:24.642 [2024-11-19 08:41:03.726565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:24.642 [2024-11-19 08:41:03.726588] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:24.642 [2024-11-19 
08:41:03.726608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:24.642 [2024-11-19 08:41:03.726654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:24.642 [2024-11-19 08:41:03.726680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:24.642 [2024-11-19 08:41:03.726693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:24.642 [2024-11-19 08:41:03.726707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:24.642 [2024-11-19 08:41:03.726725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:24.642 [2024-11-19 08:41:03.726745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:24.642 [2024-11-19 08:41:03.726767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:24.642 [2024-11-19 08:41:03.726792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:24.642 [2024-11-19 08:41:03.726805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:24.642 [2024-11-19 08:41:03.726830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:24.642 [2024-11-19 08:41:03.726849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:24.642 [2024-11-19 08:41:03.726866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:24.642 [2024-11-19 08:41:03.726878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:24.642 [2024-11-19 08:41:03.726894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:24.642 [2024-11-19 08:41:03.726911] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:24.642 [2024-11-19 08:41:03.726933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:24.642 [2024-11-19 08:41:03.726956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:24.642 [2024-11-19 08:41:03.726979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:24.642 [2024-11-19 08:41:03.726993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:24.642 [2024-11-19 08:41:03.727012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:24.642 [2024-11-19 08:41:03.727030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.642 [2024-11-19 08:41:03.727055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:24.642 [2024-11-19 08:41:03.727070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:22:24.642 [2024-11-19 08:41:03.727093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.642 [2024-11-19 08:41:03.727175] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:24.642 [2024-11-19 08:41:03.727207] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:26.544 [2024-11-19 08:41:05.781209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.544 [2024-11-19 08:41:05.781286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:26.544 [2024-11-19 08:41:05.781342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2054.045 ms 00:22:26.544 [2024-11-19 08:41:05.781357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.544 [2024-11-19 08:41:05.814399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.544 [2024-11-19 08:41:05.814486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:26.544 [2024-11-19 08:41:05.814508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.770 ms 00:22:26.544 [2024-11-19 08:41:05.814523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.544 [2024-11-19 08:41:05.814735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.544 [2024-11-19 08:41:05.814764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:26.544 [2024-11-19 08:41:05.814779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:26.544 [2024-11-19 08:41:05.814796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.803 [2024-11-19 08:41:05.855923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.803 [2024-11-19 08:41:05.856186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:26.803 [2024-11-19 08:41:05.856218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.063 ms 00:22:26.803 [2024-11-19 08:41:05.856236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.803 [2024-11-19 08:41:05.856294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.803 [2024-11-19 08:41:05.856320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:26.803 [2024-11-19 08:41:05.856334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:26.803 [2024-11-19 08:41:05.856348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.803 [2024-11-19 08:41:05.856810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.803 [2024-11-19 08:41:05.856837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:26.803 [2024-11-19 08:41:05.856852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:22:26.803 [2024-11-19 08:41:05.856866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.803 
[2024-11-19 08:41:05.857004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.803 [2024-11-19 08:41:05.857024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:26.803 [2024-11-19 08:41:05.857039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:22:26.803 [2024-11-19 08:41:05.857056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.803 [2024-11-19 08:41:05.875239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.803 [2024-11-19 08:41:05.875296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:26.803 [2024-11-19 08:41:05.875346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.157 ms 00:22:26.804 [2024-11-19 08:41:05.875360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.804 [2024-11-19 08:41:05.889162] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:26.804 [2024-11-19 08:41:05.892124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.804 [2024-11-19 08:41:05.892161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:26.804 [2024-11-19 08:41:05.892199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.629 ms 00:22:26.804 [2024-11-19 08:41:05.892212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.804 [2024-11-19 08:41:05.969348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.804 [2024-11-19 08:41:05.969624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:26.804 [2024-11-19 08:41:05.969666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.090 ms 00:22:26.804 [2024-11-19 08:41:05.969682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.804 [2024-11-19 08:41:05.969915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.804 [2024-11-19 08:41:05.969949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:26.804 [2024-11-19 08:41:05.969969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:22:26.804 [2024-11-19 08:41:05.969982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.804 [2024-11-19 08:41:06.002210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.804 [2024-11-19 08:41:06.002255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:26.804 [2024-11-19 08:41:06.002295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.128 ms 00:22:26.804 [2024-11-19 08:41:06.002308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.804 [2024-11-19 08:41:06.033994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.804 [2024-11-19 08:41:06.034038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:26.804 [2024-11-19 08:41:06.034076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.611 ms 00:22:26.804 [2024-11-19 08:41:06.034089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:26.804 [2024-11-19 08:41:06.034860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:26.804 [2024-11-19 08:41:06.034886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:26.804 
[2024-11-19 08:41:06.034904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:22:26.804 [2024-11-19 08:41:06.034916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.063 [2024-11-19 08:41:06.121438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.063 [2024-11-19 08:41:06.121502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:27.063 [2024-11-19 08:41:06.121547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.443 ms 00:22:27.063 [2024-11-19 08:41:06.121561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.063 [2024-11-19 08:41:06.154365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.063 [2024-11-19 08:41:06.154412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:27.063 [2024-11-19 08:41:06.154451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.658 ms 00:22:27.063 [2024-11-19 08:41:06.154463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.063 [2024-11-19 08:41:06.186670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.063 [2024-11-19 08:41:06.186720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:27.063 [2024-11-19 08:41:06.186758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.134 ms 00:22:27.063 [2024-11-19 08:41:06.186786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.063 [2024-11-19 08:41:06.219368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.063 [2024-11-19 08:41:06.219413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:27.063 [2024-11-19 08:41:06.219451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.530 ms 00:22:27.063 [2024-11-19 08:41:06.219464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.063 [2024-11-19 08:41:06.219548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.063 [2024-11-19 08:41:06.219569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:27.063 [2024-11-19 08:41:06.219588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:27.063 [2024-11-19 08:41:06.219600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.063 [2024-11-19 08:41:06.219744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.063 [2024-11-19 08:41:06.219766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:27.063 [2024-11-19 08:41:06.219785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:27.063 [2024-11-19 08:41:06.219797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.063 [2024-11-19 08:41:06.220921] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2508.356 ms, result 0 00:22:27.063 { 00:22:27.063 "name": "ftl0", 00:22:27.063 "uuid": "940fa22b-a2a5-4996-9f72-ddb7245a8d43" 00:22:27.063 } 00:22:27.063 08:41:06 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:27.063 08:41:06 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:27.322 08:41:06 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:22:27.322 08:41:06 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:27.581 [2024-11-19 08:41:06.820550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.581 [2024-11-19 08:41:06.820800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:27.581 [2024-11-19 08:41:06.820836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:27.581 [2024-11-19 08:41:06.820866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.581 [2024-11-19 08:41:06.820911] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:27.581 [2024-11-19 08:41:06.824280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.581 [2024-11-19 08:41:06.824440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:27.581 [2024-11-19 08:41:06.824474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.337 ms 00:22:27.581 [2024-11-19 08:41:06.824488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.581 [2024-11-19 08:41:06.824848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.581 [2024-11-19 08:41:06.824871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:27.581 [2024-11-19 08:41:06.824891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:22:27.581 [2024-11-19 08:41:06.824903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.581 [2024-11-19 08:41:06.828202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.581 [2024-11-19 08:41:06.828246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:27.581 [2024-11-19 08:41:06.828266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.272 ms 00:22:27.581 [2024-11-19 08:41:06.828278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.581 [2024-11-19 08:41:06.835010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.581 [2024-11-19 08:41:06.835169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:27.581 [2024-11-19 08:41:06.835207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.701 ms 00:22:27.581 [2024-11-19 08:41:06.835231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.581 [2024-11-19 08:41:06.866953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.581 [2024-11-19 08:41:06.867011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:27.581 [2024-11-19 08:41:06.867049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.629 ms 00:22:27.581 [2024-11-19 08:41:06.867062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.842 [2024-11-19 08:41:06.886145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.842 [2024-11-19 08:41:06.886192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:27.842 [2024-11-19 08:41:06.886230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.024 ms 00:22:27.842 [2024-11-19 08:41:06.886243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.842 [2024-11-19 08:41:06.886437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.842 [2024-11-19 08:41:06.886460] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:27.842 [2024-11-19 08:41:06.886477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:22:27.842 [2024-11-19 08:41:06.886489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.842 [2024-11-19 08:41:06.918349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.842 [2024-11-19 08:41:06.918393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:27.842 [2024-11-19 08:41:06.918414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.829 ms 00:22:27.842 [2024-11-19 08:41:06.918427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.842 [2024-11-19 08:41:06.949869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.842 [2024-11-19 08:41:06.949929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:27.842 [2024-11-19 08:41:06.949982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.387 ms 00:22:27.842 [2024-11-19 08:41:06.949995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.842 [2024-11-19 08:41:06.980365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.842 [2024-11-19 08:41:06.980581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:27.842 [2024-11-19 08:41:06.980646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.313 ms 00:22:27.842 [2024-11-19 08:41:06.980663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.842 [2024-11-19 08:41:07.013202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.842 [2024-11-19 08:41:07.013548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:27.842 [2024-11-19 08:41:07.013590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.385 ms 00:22:27.842 [2024-11-19 08:41:07.013645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.842 [2024-11-19 08:41:07.013770] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:27.842 [2024-11-19 08:41:07.013799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.013817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.013830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.013845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.013871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.013885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.013897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.013915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.013927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.013940] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.013952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.013966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.013993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 
[2024-11-19 08:41:07.014370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:27.842 [2024-11-19 08:41:07.014468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:22:27.843 [2024-11-19 08:41:07.014757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.014996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:27.843 [2024-11-19 08:41:07.015345] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:27.843 [2024-11-19 08:41:07.015363] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 940fa22b-a2a5-4996-9f72-ddb7245a8d43 00:22:27.843 [2024-11-19 08:41:07.015376] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:27.843 [2024-11-19 08:41:07.015392] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:27.843 [2024-11-19 08:41:07.015404] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:27.843 [2024-11-19 08:41:07.015422] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:27.843 [2024-11-19 08:41:07.015434] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:27.843 [2024-11-19 08:41:07.015448] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:27.843 [2024-11-19 08:41:07.015460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:27.843 [2024-11-19 08:41:07.015473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:27.843 [2024-11-19 08:41:07.015483] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:22:27.843 [2024-11-19 08:41:07.015498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.843 [2024-11-19 08:41:07.015511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:27.843 [2024-11-19 08:41:07.015539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.733 ms 00:22:27.843 [2024-11-19 08:41:07.015552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.843 [2024-11-19 08:41:07.034013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.843 [2024-11-19 08:41:07.034274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:27.843 [2024-11-19 08:41:07.034418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.325 ms 00:22:27.843 [2024-11-19 08:41:07.034544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.843 [2024-11-19 08:41:07.035094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.843 [2024-11-19 08:41:07.035231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:27.843 [2024-11-19 08:41:07.035347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:22:27.843 [2024-11-19 08:41:07.035404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.843 [2024-11-19 08:41:07.090079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.843 [2024-11-19 08:41:07.090396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:27.843 [2024-11-19 08:41:07.090538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.843 [2024-11-19 08:41:07.090590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.843 [2024-11-19 08:41:07.090839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.843 [2024-11-19 08:41:07.090989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:27.843 [2024-11-19 08:41:07.091132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.843 [2024-11-19 08:41:07.091183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.843 [2024-11-19 08:41:07.091406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.843 [2024-11-19 08:41:07.091579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:27.843 [2024-11-19 08:41:07.091716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.843 [2024-11-19 08:41:07.091828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.843 [2024-11-19 08:41:07.091951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.843 [2024-11-19 08:41:07.092032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:27.843 [2024-11-19 08:41:07.092144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.844 [2024-11-19 08:41:07.092192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.103 [2024-11-19 08:41:07.190436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.103 [2024-11-19 08:41:07.190790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:28.103 [2024-11-19 08:41:07.190921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:22:28.103 [2024-11-19 08:41:07.191043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.103 [2024-11-19 08:41:07.275583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.103 [2024-11-19 08:41:07.275802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:28.103 [2024-11-19 08:41:07.275971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.103 [2024-11-19 08:41:07.276000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.103 [2024-11-19 08:41:07.276150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.103 [2024-11-19 08:41:07.276172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:28.103 [2024-11-19 08:41:07.276188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.103 [2024-11-19 08:41:07.276200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.103 [2024-11-19 08:41:07.276276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.103 [2024-11-19 08:41:07.276294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:28.103 [2024-11-19 08:41:07.276310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.103 [2024-11-19 08:41:07.276322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.103 [2024-11-19 08:41:07.276466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.103 [2024-11-19 08:41:07.276487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:28.103 [2024-11-19 08:41:07.276502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.103 [2024-11-19 08:41:07.276514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.103 [2024-11-19 08:41:07.276574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.103 [2024-11-19 08:41:07.276599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:28.103 [2024-11-19 08:41:07.276780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.103 [2024-11-19 08:41:07.276836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.103 [2024-11-19 08:41:07.276923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.103 [2024-11-19 08:41:07.277182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:28.103 [2024-11-19 08:41:07.277241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.103 [2024-11-19 08:41:07.277282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.103 [2024-11-19 08:41:07.277449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.103 [2024-11-19 08:41:07.277523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:28.103 [2024-11-19 08:41:07.277755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.103 [2024-11-19 08:41:07.277780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.103 [2024-11-19 08:41:07.277959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 457.373 ms, result 0 00:22:28.103 true 00:22:28.103 08:41:07 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76599 
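For reference, the FTL setup/teardown sequence this restore run exercised can be replayed by hand roughly as follows. This is a minimal sketch assembled from the xtrace above, not a script shipped with SPDK: the lvol UUID and pid 76599 are values specific to this run, the ftl.json destination is inferred from the spdk_dd invocation later in this log, and killprocess is the autotest helper whose trace follows below.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # carve a 5171 MiB split off the nvme cache namespace -> nvc0n1p0
  $rpc bdev_split_create nvc0n1 -s 5171 1
  # create the FTL bdev on the lvol, with nvc0n1p0 as the write buffer cache
  $rpc -t 240 bdev_ftl_create -b ftl0 \
      -d 7cedcaa6-40be-4380-adf4-a705357c84a5 --l2p_dram_limit 10 -c nvc0n1p0
  # snapshot the bdev subsystem config (consumed later via spdk_dd --json=.../ftl.json)
  { echo '{"subsystems": ['; $rpc save_subsystem_config -n bdev; echo ']}'; }
  # unload cleanly ('Set FTL clean state' above), then stop the target app
  $rpc bdev_ftl_unload -b ftl0
  killprocess 76599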
00:22:28.103 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 76599 ']' 00:22:28.103 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 76599 00:22:28.103 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:22:28.103 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.103 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76599 00:22:28.103 killing process with pid 76599 00:22:28.103 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:28.103 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:28.103 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76599' 00:22:28.103 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 76599 00:22:28.103 08:41:07 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 76599 00:22:31.396 08:41:10 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:22:36.739 262144+0 records in 00:22:36.739 262144+0 records out 00:22:36.739 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.11132 s, 210 MB/s 00:22:36.739 08:41:15 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:38.642 08:41:17 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:38.642 [2024-11-19 08:41:17.691755] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:22:38.643 [2024-11-19 08:41:17.691934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76847 ] 00:22:38.643 [2024-11-19 08:41:17.891497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.901 [2024-11-19 08:41:18.021315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.160 [2024-11-19 08:41:18.359247] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:39.160 [2024-11-19 08:41:18.359357] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:39.420 [2024-11-19 08:41:18.530182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.420 [2024-11-19 08:41:18.530247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:39.420 [2024-11-19 08:41:18.530295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:39.420 [2024-11-19 08:41:18.530321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.420 [2024-11-19 08:41:18.530417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.421 [2024-11-19 08:41:18.530435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:39.421 [2024-11-19 08:41:18.530456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:39.421 [2024-11-19 08:41:18.530465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.421 [2024-11-19 08:41:18.530493] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:22:39.421 [2024-11-19 08:41:18.531504] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:39.421 [2024-11-19 08:41:18.531564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.421 [2024-11-19 08:41:18.531578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:39.421 [2024-11-19 08:41:18.531591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:22:39.421 [2024-11-19 08:41:18.531602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.421 [2024-11-19 08:41:18.532941] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:39.421 [2024-11-19 08:41:18.550261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.421 [2024-11-19 08:41:18.550521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:39.421 [2024-11-19 08:41:18.550551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.321 ms 00:22:39.421 [2024-11-19 08:41:18.550565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.421 [2024-11-19 08:41:18.550708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.421 [2024-11-19 08:41:18.550730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:39.421 [2024-11-19 08:41:18.550743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:39.421 [2024-11-19 08:41:18.550754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.421 [2024-11-19 08:41:18.555710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.421 [2024-11-19 08:41:18.555760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:39.421 [2024-11-19 08:41:18.555778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.833 ms 00:22:39.421 [2024-11-19 08:41:18.555789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.421 [2024-11-19 08:41:18.556003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.421 [2024-11-19 08:41:18.556022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:39.421 [2024-11-19 08:41:18.556034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:22:39.421 [2024-11-19 08:41:18.556045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.421 [2024-11-19 08:41:18.556104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.421 [2024-11-19 08:41:18.556120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:39.421 [2024-11-19 08:41:18.556131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:39.421 [2024-11-19 08:41:18.556141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.421 [2024-11-19 08:41:18.556173] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:39.421 [2024-11-19 08:41:18.560497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.421 [2024-11-19 08:41:18.560533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:39.421 [2024-11-19 08:41:18.560563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.333 ms 00:22:39.421 [2024-11-19 08:41:18.560602] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.421 [2024-11-19 08:41:18.560677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.421 [2024-11-19 08:41:18.560693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:39.421 [2024-11-19 08:41:18.560705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:39.421 [2024-11-19 08:41:18.560716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.421 [2024-11-19 08:41:18.560806] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:39.421 [2024-11-19 08:41:18.560849] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:39.421 [2024-11-19 08:41:18.560895] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:39.421 [2024-11-19 08:41:18.560924] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:39.421 [2024-11-19 08:41:18.561040] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:39.421 [2024-11-19 08:41:18.561055] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:39.421 [2024-11-19 08:41:18.561070] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:39.421 [2024-11-19 08:41:18.561084] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:39.421 [2024-11-19 08:41:18.561098] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:39.421 [2024-11-19 08:41:18.561110] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:39.421 [2024-11-19 08:41:18.561121] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:39.421 [2024-11-19 08:41:18.561131] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:39.421 [2024-11-19 08:41:18.561141] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:39.421 [2024-11-19 08:41:18.561166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.421 [2024-11-19 08:41:18.561178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:39.421 [2024-11-19 08:41:18.561189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:22:39.421 [2024-11-19 08:41:18.561201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.421 [2024-11-19 08:41:18.561325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.421 [2024-11-19 08:41:18.561371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:39.421 [2024-11-19 08:41:18.561384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:22:39.421 [2024-11-19 08:41:18.561394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.421 [2024-11-19 08:41:18.561522] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:39.421 [2024-11-19 08:41:18.561747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:39.421 [2024-11-19 08:41:18.561775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:22:39.421 [2024-11-19 08:41:18.561788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.421 [2024-11-19 08:41:18.561814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:39.421 [2024-11-19 08:41:18.561823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:39.421 [2024-11-19 08:41:18.561833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:39.421 [2024-11-19 08:41:18.561844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:39.421 [2024-11-19 08:41:18.561855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:39.421 [2024-11-19 08:41:18.561864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:39.421 [2024-11-19 08:41:18.561873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:39.421 [2024-11-19 08:41:18.561883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:39.421 [2024-11-19 08:41:18.561893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:39.421 [2024-11-19 08:41:18.561902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:39.421 [2024-11-19 08:41:18.561912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:39.421 [2024-11-19 08:41:18.561947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.421 [2024-11-19 08:41:18.561974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:39.421 [2024-11-19 08:41:18.561999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:39.421 [2024-11-19 08:41:18.562008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.421 [2024-11-19 08:41:18.562018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:39.421 [2024-11-19 08:41:18.562028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:39.421 [2024-11-19 08:41:18.562037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:39.421 [2024-11-19 08:41:18.562047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:39.421 [2024-11-19 08:41:18.562056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:39.421 [2024-11-19 08:41:18.562082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:39.421 [2024-11-19 08:41:18.562092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:39.421 [2024-11-19 08:41:18.562101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:39.421 [2024-11-19 08:41:18.562111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:39.421 [2024-11-19 08:41:18.562121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:39.421 [2024-11-19 08:41:18.562131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:39.421 [2024-11-19 08:41:18.562141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:39.421 [2024-11-19 08:41:18.562151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:39.421 [2024-11-19 08:41:18.562161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:39.421 [2024-11-19 08:41:18.562170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:39.421 [2024-11-19 08:41:18.562180] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:22:39.421 [2024-11-19 08:41:18.562190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:39.422 [2024-11-19 08:41:18.562200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:39.422 [2024-11-19 08:41:18.562210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:39.422 [2024-11-19 08:41:18.562222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:39.422 [2024-11-19 08:41:18.562233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.422 [2024-11-19 08:41:18.562243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:39.422 [2024-11-19 08:41:18.562253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:39.422 [2024-11-19 08:41:18.562262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.422 [2024-11-19 08:41:18.562272] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:39.422 [2024-11-19 08:41:18.562283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:39.422 [2024-11-19 08:41:18.562294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:39.422 [2024-11-19 08:41:18.562304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.422 [2024-11-19 08:41:18.562315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:39.422 [2024-11-19 08:41:18.562326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:39.422 [2024-11-19 08:41:18.562336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:39.422 [2024-11-19 08:41:18.562346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:39.422 [2024-11-19 08:41:18.562355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:39.422 [2024-11-19 08:41:18.562365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:39.422 [2024-11-19 08:41:18.562392] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:39.422 [2024-11-19 08:41:18.562406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:39.422 [2024-11-19 08:41:18.562418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:39.422 [2024-11-19 08:41:18.562428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:39.422 [2024-11-19 08:41:18.562438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:39.422 [2024-11-19 08:41:18.562449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:39.422 [2024-11-19 08:41:18.562459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:39.422 [2024-11-19 08:41:18.562469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:39.422 [2024-11-19 08:41:18.562479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:39.422 [2024-11-19 08:41:18.562490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:39.422 [2024-11-19 08:41:18.562500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:39.422 [2024-11-19 08:41:18.562510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:39.422 [2024-11-19 08:41:18.562521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:39.422 [2024-11-19 08:41:18.562531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:39.422 [2024-11-19 08:41:18.562557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:39.422 [2024-11-19 08:41:18.562569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:39.422 [2024-11-19 08:41:18.562580] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:39.422 [2024-11-19 08:41:18.562617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:39.422 [2024-11-19 08:41:18.562634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:39.422 [2024-11-19 08:41:18.562646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:39.422 [2024-11-19 08:41:18.562657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:39.422 [2024-11-19 08:41:18.562667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:39.422 [2024-11-19 08:41:18.562681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.422 [2024-11-19 08:41:18.562692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:39.422 [2024-11-19 08:41:18.562704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.230 ms 00:22:39.422 [2024-11-19 08:41:18.562715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.422 [2024-11-19 08:41:18.599507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.422 [2024-11-19 08:41:18.599600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:39.422 [2024-11-19 08:41:18.599641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.715 ms 00:22:39.422 [2024-11-19 08:41:18.599654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.422 [2024-11-19 08:41:18.599788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.422 [2024-11-19 08:41:18.599804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:39.422 [2024-11-19 08:41:18.599817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.067 ms 00:22:39.422 [2024-11-19 08:41:18.599828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.422 [2024-11-19 08:41:18.657050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.422 [2024-11-19 08:41:18.657315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:39.422 [2024-11-19 08:41:18.657362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.097 ms 00:22:39.422 [2024-11-19 08:41:18.657375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.422 [2024-11-19 08:41:18.657450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.422 [2024-11-19 08:41:18.657467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:39.422 [2024-11-19 08:41:18.657480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:39.422 [2024-11-19 08:41:18.657509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.422 [2024-11-19 08:41:18.658022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.422 [2024-11-19 08:41:18.658044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:39.422 [2024-11-19 08:41:18.658058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:22:39.422 [2024-11-19 08:41:18.658069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.422 [2024-11-19 08:41:18.658237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.422 [2024-11-19 08:41:18.658259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:39.422 [2024-11-19 08:41:18.658271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:22:39.422 [2024-11-19 08:41:18.658297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.422 [2024-11-19 08:41:18.677817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.422 [2024-11-19 08:41:18.677869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:39.422 [2024-11-19 08:41:18.677913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.474 ms 00:22:39.422 [2024-11-19 08:41:18.677924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.422 [2024-11-19 08:41:18.695402] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:39.422 [2024-11-19 08:41:18.695446] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:39.422 [2024-11-19 08:41:18.695479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.422 [2024-11-19 08:41:18.695498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:39.422 [2024-11-19 08:41:18.695511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.378 ms 00:22:39.422 [2024-11-19 08:41:18.695544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.682 [2024-11-19 08:41:18.727237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.682 [2024-11-19 08:41:18.727294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:39.682 [2024-11-19 08:41:18.727358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.641 ms 00:22:39.682 [2024-11-19 08:41:18.727370] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.682 [2024-11-19 08:41:18.743573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.682 [2024-11-19 08:41:18.743647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:39.682 [2024-11-19 08:41:18.743666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.156 ms 00:22:39.682 [2024-11-19 08:41:18.743677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.682 [2024-11-19 08:41:18.759373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.682 [2024-11-19 08:41:18.759414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:39.682 [2024-11-19 08:41:18.759461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.650 ms 00:22:39.682 [2024-11-19 08:41:18.759470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.682 [2024-11-19 08:41:18.760459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.682 [2024-11-19 08:41:18.760661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:39.682 [2024-11-19 08:41:18.760689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:22:39.682 [2024-11-19 08:41:18.760701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.682 [2024-11-19 08:41:18.838295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.682 [2024-11-19 08:41:18.838404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:39.682 [2024-11-19 08:41:18.838426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.545 ms 00:22:39.682 [2024-11-19 08:41:18.838454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.682 [2024-11-19 08:41:18.852570] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:39.682 [2024-11-19 08:41:18.855698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.682 [2024-11-19 08:41:18.855739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:39.682 [2024-11-19 08:41:18.855765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.154 ms 00:22:39.682 [2024-11-19 08:41:18.855776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.682 [2024-11-19 08:41:18.855914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.682 [2024-11-19 08:41:18.855948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:39.682 [2024-11-19 08:41:18.855989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:39.682 [2024-11-19 08:41:18.856001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.682 [2024-11-19 08:41:18.856101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.682 [2024-11-19 08:41:18.856122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:39.682 [2024-11-19 08:41:18.856135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:39.682 [2024-11-19 08:41:18.856146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.682 [2024-11-19 08:41:18.856177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.682 [2024-11-19 08:41:18.856192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:22:39.682 [2024-11-19 08:41:18.856204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:39.682 [2024-11-19 08:41:18.856215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.682 [2024-11-19 08:41:18.856257] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:39.682 [2024-11-19 08:41:18.856273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.682 [2024-11-19 08:41:18.856319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:39.682 [2024-11-19 08:41:18.856346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:39.682 [2024-11-19 08:41:18.856356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.682 [2024-11-19 08:41:18.887918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.682 [2024-11-19 08:41:18.888137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:39.682 [2024-11-19 08:41:18.888261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.540 ms 00:22:39.682 [2024-11-19 08:41:18.888314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.682 [2024-11-19 08:41:18.888448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.682 [2024-11-19 08:41:18.888605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:39.682 [2024-11-19 08:41:18.888680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:22:39.682 [2024-11-19 08:41:18.888792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.682 [2024-11-19 08:41:18.890007] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 359.277 ms, result 0 00:22:40.619  [2024-11-19T08:41:21.294Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-19T08:41:22.230Z] Copying: 52/1024 [MB] (26 MBps) [2024-11-19T08:41:23.165Z] Copying: 77/1024 [MB] (25 MBps) [2024-11-19T08:41:24.101Z] Copying: 103/1024 [MB] (25 MBps) [2024-11-19T08:41:25.037Z] Copying: 130/1024 [MB] (26 MBps) [2024-11-19T08:41:25.972Z] Copying: 157/1024 [MB] (27 MBps) [2024-11-19T08:41:26.908Z] Copying: 183/1024 [MB] (26 MBps) [2024-11-19T08:41:28.283Z] Copying: 210/1024 [MB] (26 MBps) [2024-11-19T08:41:29.228Z] Copying: 236/1024 [MB] (26 MBps) [2024-11-19T08:41:30.163Z] Copying: 263/1024 [MB] (26 MBps) [2024-11-19T08:41:31.100Z] Copying: 290/1024 [MB] (26 MBps) [2024-11-19T08:41:32.035Z] Copying: 317/1024 [MB] (26 MBps) [2024-11-19T08:41:32.972Z] Copying: 344/1024 [MB] (26 MBps) [2024-11-19T08:41:33.909Z] Copying: 369/1024 [MB] (25 MBps) [2024-11-19T08:41:35.295Z] Copying: 393/1024 [MB] (24 MBps) [2024-11-19T08:41:36.231Z] Copying: 420/1024 [MB] (26 MBps) [2024-11-19T08:41:37.166Z] Copying: 446/1024 [MB] (26 MBps) [2024-11-19T08:41:38.100Z] Copying: 472/1024 [MB] (25 MBps) [2024-11-19T08:41:39.035Z] Copying: 497/1024 [MB] (25 MBps) [2024-11-19T08:41:40.082Z] Copying: 523/1024 [MB] (26 MBps) [2024-11-19T08:41:41.019Z] Copying: 550/1024 [MB] (26 MBps) [2024-11-19T08:41:41.956Z] Copying: 577/1024 [MB] (27 MBps) [2024-11-19T08:41:43.334Z] Copying: 604/1024 [MB] (27 MBps) [2024-11-19T08:41:44.271Z] Copying: 631/1024 [MB] (26 MBps) [2024-11-19T08:41:45.208Z] Copying: 657/1024 [MB] (26 MBps) [2024-11-19T08:41:46.145Z] Copying: 681/1024 [MB] (23 MBps) [2024-11-19T08:41:47.082Z] Copying: 706/1024 [MB] (24 
MBps) [2024-11-19T08:41:48.020Z] Copying: 732/1024 [MB] (25 MBps) [2024-11-19T08:41:48.955Z] Copying: 757/1024 [MB] (25 MBps) [2024-11-19T08:41:50.332Z] Copying: 783/1024 [MB] (25 MBps) [2024-11-19T08:41:51.269Z] Copying: 808/1024 [MB] (25 MBps) [2024-11-19T08:41:52.205Z] Copying: 833/1024 [MB] (24 MBps) [2024-11-19T08:41:53.184Z] Copying: 858/1024 [MB] (24 MBps) [2024-11-19T08:41:54.120Z] Copying: 885/1024 [MB] (26 MBps) [2024-11-19T08:41:55.057Z] Copying: 910/1024 [MB] (25 MBps) [2024-11-19T08:41:55.993Z] Copying: 936/1024 [MB] (25 MBps) [2024-11-19T08:41:56.929Z] Copying: 961/1024 [MB] (25 MBps) [2024-11-19T08:41:58.303Z] Copying: 987/1024 [MB] (25 MBps) [2024-11-19T08:41:58.562Z] Copying: 1013/1024 [MB] (25 MBps) [2024-11-19T08:41:58.562Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-19 08:41:58.321999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.266 [2024-11-19 08:41:58.322078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:19.266 [2024-11-19 08:41:58.322108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:19.266 [2024-11-19 08:41:58.322141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.266 [2024-11-19 08:41:58.322172] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:19.267 [2024-11-19 08:41:58.325634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.267 [2024-11-19 08:41:58.325677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:19.267 [2024-11-19 08:41:58.325693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.426 ms 00:23:19.267 [2024-11-19 08:41:58.325704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.267 [2024-11-19 08:41:58.327237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.267 [2024-11-19 08:41:58.327283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:19.267 [2024-11-19 08:41:58.327299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.495 ms 00:23:19.267 [2024-11-19 08:41:58.327326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.267 [2024-11-19 08:41:58.343099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.267 [2024-11-19 08:41:58.343140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:19.267 [2024-11-19 08:41:58.343172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.752 ms 00:23:19.267 [2024-11-19 08:41:58.343183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.267 [2024-11-19 08:41:58.349725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.267 [2024-11-19 08:41:58.349766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:19.267 [2024-11-19 08:41:58.349796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.503 ms 00:23:19.267 [2024-11-19 08:41:58.349806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.267 [2024-11-19 08:41:58.380443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.267 [2024-11-19 08:41:58.380639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:19.267 [2024-11-19 08:41:58.380668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.577 ms 00:23:19.267 [2024-11-19 
08:41:58.380681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.267 [2024-11-19 08:41:58.398430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.267 [2024-11-19 08:41:58.398470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:19.267 [2024-11-19 08:41:58.398503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.706 ms 00:23:19.267 [2024-11-19 08:41:58.398515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.267 [2024-11-19 08:41:58.398682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.267 [2024-11-19 08:41:58.398701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:19.267 [2024-11-19 08:41:58.398721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:23:19.267 [2024-11-19 08:41:58.398731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.267 [2024-11-19 08:41:58.428373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.267 [2024-11-19 08:41:58.428558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:19.267 [2024-11-19 08:41:58.428584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.624 ms 00:23:19.267 [2024-11-19 08:41:58.428598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.267 [2024-11-19 08:41:58.459051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.267 [2024-11-19 08:41:58.459091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:19.267 [2024-11-19 08:41:58.459138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.377 ms 00:23:19.267 [2024-11-19 08:41:58.459149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.267 [2024-11-19 08:41:58.490360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.267 [2024-11-19 08:41:58.490401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:19.267 [2024-11-19 08:41:58.490433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.169 ms 00:23:19.267 [2024-11-19 08:41:58.490443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.267 [2024-11-19 08:41:58.520937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.267 [2024-11-19 08:41:58.520992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:19.267 [2024-11-19 08:41:58.521025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.409 ms 00:23:19.267 [2024-11-19 08:41:58.521036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.267 [2024-11-19 08:41:58.521079] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:19.267 [2024-11-19 08:41:58.521119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521176] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 
08:41:58.521475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:19.267 [2024-11-19 08:41:58.521762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:23:19.268 [2024-11-19 08:41:58.521806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.521999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:19.268 [2024-11-19 08:41:58.522354] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:19.268 [2024-11-19 08:41:58.522371] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 940fa22b-a2a5-4996-9f72-ddb7245a8d43 00:23:19.268 [2024-11-19 08:41:58.522382] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:19.268 [2024-11-19 08:41:58.522396] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:19.268 [2024-11-19 
08:41:58.522406] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:19.268 [2024-11-19 08:41:58.522417] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:19.268 [2024-11-19 08:41:58.522426] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:19.268 [2024-11-19 08:41:58.522437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:19.268 [2024-11-19 08:41:58.522447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:19.268 [2024-11-19 08:41:58.522468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:19.268 [2024-11-19 08:41:58.522478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:19.268 [2024-11-19 08:41:58.522489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.268 [2024-11-19 08:41:58.522500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:19.268 [2024-11-19 08:41:58.522511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.411 ms 00:23:19.268 [2024-11-19 08:41:58.522521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.268 [2024-11-19 08:41:58.538953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.268 [2024-11-19 08:41:58.538992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:19.268 [2024-11-19 08:41:58.539025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.389 ms 00:23:19.268 [2024-11-19 08:41:58.539035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.268 [2024-11-19 08:41:58.539488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.268 [2024-11-19 08:41:58.539514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:19.268 [2024-11-19 08:41:58.539528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:23:19.268 [2024-11-19 08:41:58.539547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.527 [2024-11-19 08:41:58.581534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.527 [2024-11-19 08:41:58.581580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:19.527 [2024-11-19 08:41:58.581613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.527 [2024-11-19 08:41:58.581670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.527 [2024-11-19 08:41:58.581735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.527 [2024-11-19 08:41:58.581750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:19.527 [2024-11-19 08:41:58.581761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.527 [2024-11-19 08:41:58.581772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.527 [2024-11-19 08:41:58.581919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.527 [2024-11-19 08:41:58.581947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:19.527 [2024-11-19 08:41:58.581961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.527 [2024-11-19 08:41:58.581971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.527 [2024-11-19 08:41:58.581996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:23:19.527 [2024-11-19 08:41:58.582009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:19.527 [2024-11-19 08:41:58.582021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.527 [2024-11-19 08:41:58.582031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.527 [2024-11-19 08:41:58.682065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.527 [2024-11-19 08:41:58.682121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:19.527 [2024-11-19 08:41:58.682156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.527 [2024-11-19 08:41:58.682167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.527 [2024-11-19 08:41:58.765566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.527 [2024-11-19 08:41:58.765677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:19.527 [2024-11-19 08:41:58.765713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.527 [2024-11-19 08:41:58.765725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.527 [2024-11-19 08:41:58.765827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.527 [2024-11-19 08:41:58.765852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:19.527 [2024-11-19 08:41:58.765864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.527 [2024-11-19 08:41:58.765875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.527 [2024-11-19 08:41:58.765921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.527 [2024-11-19 08:41:58.765937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:19.527 [2024-11-19 08:41:58.765948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.527 [2024-11-19 08:41:58.765959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.527 [2024-11-19 08:41:58.766094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.527 [2024-11-19 08:41:58.766118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:19.527 [2024-11-19 08:41:58.766130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.527 [2024-11-19 08:41:58.766140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.527 [2024-11-19 08:41:58.766187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.527 [2024-11-19 08:41:58.766369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:19.527 [2024-11-19 08:41:58.766394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.527 [2024-11-19 08:41:58.766406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.527 [2024-11-19 08:41:58.766473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.527 [2024-11-19 08:41:58.766495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:19.527 [2024-11-19 08:41:58.766515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.527 [2024-11-19 08:41:58.766526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.527 
[2024-11-19 08:41:58.766581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:19.527 [2024-11-19 08:41:58.766597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:19.527 [2024-11-19 08:41:58.766639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:19.527 [2024-11-19 08:41:58.766653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.527 [2024-11-19 08:41:58.766817] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 444.785 ms, result 0 00:23:20.463 00:23:20.463 00:23:20.463 08:41:59 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:20.721 [2024-11-19 08:41:59.811988] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:23:20.721 [2024-11-19 08:41:59.812285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77274 ] 00:23:20.721 [2024-11-19 08:41:59.983422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.980 [2024-11-19 08:42:00.083694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.239 [2024-11-19 08:42:00.394034] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:21.239 [2024-11-19 08:42:00.394152] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:21.515 [2024-11-19 08:42:00.553875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.516 [2024-11-19 08:42:00.553958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:21.516 [2024-11-19 08:42:00.553988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:21.516 [2024-11-19 08:42:00.554000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.516 [2024-11-19 08:42:00.554065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.516 [2024-11-19 08:42:00.554084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:21.516 [2024-11-19 08:42:00.554100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:21.516 [2024-11-19 08:42:00.554111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.516 [2024-11-19 08:42:00.554141] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:21.516 [2024-11-19 08:42:00.555070] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:21.516 [2024-11-19 08:42:00.555106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.516 [2024-11-19 08:42:00.555119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:21.516 [2024-11-19 08:42:00.555131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:23:21.516 [2024-11-19 08:42:00.555142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.516 [2024-11-19 08:42:00.556279] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 
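Note: the records above close the 'FTL shutdown' sequence for the write pass, and restore.sh@74 has just started the read-back pass through the same ftl0 bdev. A minimal sketch of the data-integrity flow these steps implement, reconstructed only from the commands visible in this log (shell variable names here are hypothetical; the final checksum comparison is an assumption inferred from the md5sum taken at restore.sh@70, not shown verbatim in this excerpt):

  # Create a 1 GiB random test file: 256K records x 4 KiB = 262144 * 4096 = 1073741824 bytes
  dd if=/dev/urandom of="$testfile" bs=4K count=256K
  md5sum "$testfile" > "$md5_before"                        # checksum before the FTL round trip
  # Write the file into the ftl0 bdev, then read the same 262144 blocks back over it
  spdk_dd --if="$testfile" --ob=ftl0 --json="$ftl_config"
  spdk_dd --ib=ftl0 --of="$testfile" --json="$ftl_config" --count=262144
  md5sum "$testfile" > "$md5_after"                         # assumed verification step: sums must match
  diff "$md5_before" "$md5_after"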
00:23:21.516 [2024-11-19 08:42:00.572664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.516 [2024-11-19 08:42:00.572730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:21.516 [2024-11-19 08:42:00.572763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.387 ms 00:23:21.516 [2024-11-19 08:42:00.572775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.516 [2024-11-19 08:42:00.572853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.516 [2024-11-19 08:42:00.572872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:21.516 [2024-11-19 08:42:00.572885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:21.516 [2024-11-19 08:42:00.572896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.516 [2024-11-19 08:42:00.577195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.516 [2024-11-19 08:42:00.577237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:21.516 [2024-11-19 08:42:00.577252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.205 ms 00:23:21.516 [2024-11-19 08:42:00.577263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.516 [2024-11-19 08:42:00.577361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.516 [2024-11-19 08:42:00.577380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:21.516 [2024-11-19 08:42:00.577392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:23:21.516 [2024-11-19 08:42:00.577403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.516 [2024-11-19 08:42:00.577463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.516 [2024-11-19 08:42:00.577481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:21.516 [2024-11-19 08:42:00.577493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:21.516 [2024-11-19 08:42:00.577504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.516 [2024-11-19 08:42:00.577538] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:21.516 [2024-11-19 08:42:00.581907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.516 [2024-11-19 08:42:00.581944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:21.516 [2024-11-19 08:42:00.581976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.379 ms 00:23:21.516 [2024-11-19 08:42:00.581992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.516 [2024-11-19 08:42:00.582031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.516 [2024-11-19 08:42:00.582046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:21.516 [2024-11-19 08:42:00.582058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:21.516 [2024-11-19 08:42:00.582069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.516 [2024-11-19 08:42:00.582136] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:21.516 [2024-11-19 08:42:00.582175] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc 
layout blob load 0x150 bytes 00:23:21.516 [2024-11-19 08:42:00.582222] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:21.516 [2024-11-19 08:42:00.582245] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:21.516 [2024-11-19 08:42:00.582368] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:21.516 [2024-11-19 08:42:00.582392] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:21.516 [2024-11-19 08:42:00.582407] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:21.516 [2024-11-19 08:42:00.582422] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:21.516 [2024-11-19 08:42:00.582437] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:21.516 [2024-11-19 08:42:00.582448] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:21.516 [2024-11-19 08:42:00.582459] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:21.516 [2024-11-19 08:42:00.582469] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:21.516 [2024-11-19 08:42:00.582479] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:21.516 [2024-11-19 08:42:00.582498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.516 [2024-11-19 08:42:00.582509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:21.516 [2024-11-19 08:42:00.582521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:23:21.516 [2024-11-19 08:42:00.582532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.516 [2024-11-19 08:42:00.582644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.516 [2024-11-19 08:42:00.582661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:21.516 [2024-11-19 08:42:00.582673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:23:21.516 [2024-11-19 08:42:00.582684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.516 [2024-11-19 08:42:00.582809] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:21.516 [2024-11-19 08:42:00.582839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:21.516 [2024-11-19 08:42:00.582851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:21.516 [2024-11-19 08:42:00.582863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.516 [2024-11-19 08:42:00.582874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:21.516 [2024-11-19 08:42:00.582886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:21.516 [2024-11-19 08:42:00.582896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:21.516 [2024-11-19 08:42:00.582906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:21.516 [2024-11-19 08:42:00.582916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:21.516 [2024-11-19 08:42:00.582927] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.50 MiB 00:23:21.516 [2024-11-19 08:42:00.582937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:21.516 [2024-11-19 08:42:00.582947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:21.516 [2024-11-19 08:42:00.582956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:21.516 [2024-11-19 08:42:00.582966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:21.516 [2024-11-19 08:42:00.582976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:21.516 [2024-11-19 08:42:00.582997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.516 [2024-11-19 08:42:00.583008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:21.516 [2024-11-19 08:42:00.583018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:21.516 [2024-11-19 08:42:00.583027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.516 [2024-11-19 08:42:00.583038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:21.516 [2024-11-19 08:42:00.583048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:21.516 [2024-11-19 08:42:00.583058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.516 [2024-11-19 08:42:00.583068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:21.516 [2024-11-19 08:42:00.583078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:21.516 [2024-11-19 08:42:00.583087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.516 [2024-11-19 08:42:00.583097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:21.516 [2024-11-19 08:42:00.583107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:21.516 [2024-11-19 08:42:00.583117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.516 [2024-11-19 08:42:00.583126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:21.516 [2024-11-19 08:42:00.583136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:21.516 [2024-11-19 08:42:00.583146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.516 [2024-11-19 08:42:00.583156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:21.516 [2024-11-19 08:42:00.583166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:21.516 [2024-11-19 08:42:00.583175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:21.516 [2024-11-19 08:42:00.583185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:21.516 [2024-11-19 08:42:00.583195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:21.516 [2024-11-19 08:42:00.583205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:21.516 [2024-11-19 08:42:00.583216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:21.516 [2024-11-19 08:42:00.583226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:21.517 [2024-11-19 08:42:00.583236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.517 [2024-11-19 08:42:00.583246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:21.517 [2024-11-19 08:42:00.583256] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:21.517 [2024-11-19 08:42:00.583265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.517 [2024-11-19 08:42:00.583275] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:21.517 [2024-11-19 08:42:00.583286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:21.517 [2024-11-19 08:42:00.583297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:21.517 [2024-11-19 08:42:00.583307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.517 [2024-11-19 08:42:00.583318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:21.517 [2024-11-19 08:42:00.583328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:21.517 [2024-11-19 08:42:00.583338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:21.517 [2024-11-19 08:42:00.583348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:21.517 [2024-11-19 08:42:00.583358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:21.517 [2024-11-19 08:42:00.583368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:21.517 [2024-11-19 08:42:00.583380] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:21.517 [2024-11-19 08:42:00.583393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:21.517 [2024-11-19 08:42:00.583405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:21.517 [2024-11-19 08:42:00.583417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:21.517 [2024-11-19 08:42:00.583427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:21.517 [2024-11-19 08:42:00.583438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:21.517 [2024-11-19 08:42:00.583449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:21.517 [2024-11-19 08:42:00.583460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:21.517 [2024-11-19 08:42:00.583471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:21.517 [2024-11-19 08:42:00.583482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:21.517 [2024-11-19 08:42:00.583493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:21.517 [2024-11-19 08:42:00.583504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:21.517 [2024-11-19 08:42:00.583515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 
00:23:21.517 [2024-11-19 08:42:00.583525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:21.517 [2024-11-19 08:42:00.583536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:21.517 [2024-11-19 08:42:00.583558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:21.517 [2024-11-19 08:42:00.583571] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:21.517 [2024-11-19 08:42:00.583589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:21.517 [2024-11-19 08:42:00.583601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:21.517 [2024-11-19 08:42:00.583626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:21.517 [2024-11-19 08:42:00.583638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:21.517 [2024-11-19 08:42:00.583649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:21.517 [2024-11-19 08:42:00.583661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-11-19 08:42:00.583673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:21.517 [2024-11-19 08:42:00.583684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:23:21.517 [2024-11-19 08:42:00.583694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.517 [2024-11-19 08:42:00.616497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-11-19 08:42:00.616573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:21.517 [2024-11-19 08:42:00.616592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.715 ms 00:23:21.517 [2024-11-19 08:42:00.616604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.517 [2024-11-19 08:42:00.616740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-11-19 08:42:00.616755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:21.517 [2024-11-19 08:42:00.616767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:21.517 [2024-11-19 08:42:00.616778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.517 [2024-11-19 08:42:00.665251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-11-19 08:42:00.665315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:21.517 [2024-11-19 08:42:00.665334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.382 ms 00:23:21.517 [2024-11-19 08:42:00.665346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.517 [2024-11-19 08:42:00.665421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-11-19 08:42:00.665438] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:21.517 [2024-11-19 08:42:00.665450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:21.517 [2024-11-19 08:42:00.665468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.517 [2024-11-19 08:42:00.665888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-11-19 08:42:00.665920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:21.517 [2024-11-19 08:42:00.665935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:23:21.517 [2024-11-19 08:42:00.665946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.517 [2024-11-19 08:42:00.666105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-11-19 08:42:00.666123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:21.517 [2024-11-19 08:42:00.666136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:23:21.517 [2024-11-19 08:42:00.666154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.517 [2024-11-19 08:42:00.682946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-11-19 08:42:00.683004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:21.517 [2024-11-19 08:42:00.683042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.764 ms 00:23:21.517 [2024-11-19 08:42:00.683054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.517 [2024-11-19 08:42:00.699173] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:21.517 [2024-11-19 08:42:00.699232] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:21.517 [2024-11-19 08:42:00.699267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-11-19 08:42:00.699279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:21.517 [2024-11-19 08:42:00.699292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.079 ms 00:23:21.517 [2024-11-19 08:42:00.699303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.517 [2024-11-19 08:42:00.729193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-11-19 08:42:00.729273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:21.517 [2024-11-19 08:42:00.729291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.844 ms 00:23:21.517 [2024-11-19 08:42:00.729303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.517 [2024-11-19 08:42:00.745031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-11-19 08:42:00.745088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:21.517 [2024-11-19 08:42:00.745119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.680 ms 00:23:21.517 [2024-11-19 08:42:00.745130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.517 [2024-11-19 08:42:00.760250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-11-19 08:42:00.760305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:21.517 
[2024-11-19 08:42:00.760337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.075 ms 00:23:21.517 [2024-11-19 08:42:00.760348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.517 [2024-11-19 08:42:00.761177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-11-19 08:42:00.761229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:21.517 [2024-11-19 08:42:00.761243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:23:21.517 [2024-11-19 08:42:00.761260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.794 [2024-11-19 08:42:00.834121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.794 [2024-11-19 08:42:00.834209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:21.794 [2024-11-19 08:42:00.834252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.835 ms 00:23:21.794 [2024-11-19 08:42:00.834264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.794 [2024-11-19 08:42:00.847268] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:21.794 [2024-11-19 08:42:00.849920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.794 [2024-11-19 08:42:00.849968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:21.794 [2024-11-19 08:42:00.849999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.577 ms 00:23:21.794 [2024-11-19 08:42:00.850010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.794 [2024-11-19 08:42:00.850117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.794 [2024-11-19 08:42:00.850136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:21.794 [2024-11-19 08:42:00.850149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:21.794 [2024-11-19 08:42:00.850163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.794 [2024-11-19 08:42:00.850268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.794 [2024-11-19 08:42:00.850287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:21.794 [2024-11-19 08:42:00.850299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:21.794 [2024-11-19 08:42:00.850310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.794 [2024-11-19 08:42:00.850341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.794 [2024-11-19 08:42:00.850356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:21.794 [2024-11-19 08:42:00.850368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:21.794 [2024-11-19 08:42:00.850378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.794 [2024-11-19 08:42:00.850421] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:21.794 [2024-11-19 08:42:00.850441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.795 [2024-11-19 08:42:00.850452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:21.795 [2024-11-19 08:42:00.850463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:21.795 
[2024-11-19 08:42:00.850474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.795 [2024-11-19 08:42:00.880829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.795 [2024-11-19 08:42:00.880889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:21.795 [2024-11-19 08:42:00.880906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.332 ms 00:23:21.795 [2024-11-19 08:42:00.880924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.795 [2024-11-19 08:42:00.881010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.795 [2024-11-19 08:42:00.881028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:21.795 [2024-11-19 08:42:00.881041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:21.795 [2024-11-19 08:42:00.881051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.795 [2024-11-19 08:42:00.882359] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.945 ms, result 0 00:23:23.166  [2024-11-19T08:42:03.398Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-19T08:42:04.332Z] Copying: 53/1024 [MB] (26 MBps) [2024-11-19T08:42:05.268Z] Copying: 79/1024 [MB] (26 MBps) [2024-11-19T08:42:06.205Z] Copying: 105/1024 [MB] (25 MBps) [2024-11-19T08:42:07.142Z] Copying: 131/1024 [MB] (26 MBps) [2024-11-19T08:42:08.518Z] Copying: 158/1024 [MB] (26 MBps) [2024-11-19T08:42:09.453Z] Copying: 183/1024 [MB] (25 MBps) [2024-11-19T08:42:10.388Z] Copying: 209/1024 [MB] (26 MBps) [2024-11-19T08:42:11.364Z] Copying: 235/1024 [MB] (25 MBps) [2024-11-19T08:42:12.318Z] Copying: 262/1024 [MB] (26 MBps) [2024-11-19T08:42:13.255Z] Copying: 288/1024 [MB] (26 MBps) [2024-11-19T08:42:14.191Z] Copying: 313/1024 [MB] (25 MBps) [2024-11-19T08:42:15.127Z] Copying: 338/1024 [MB] (24 MBps) [2024-11-19T08:42:16.501Z] Copying: 363/1024 [MB] (25 MBps) [2024-11-19T08:42:17.436Z] Copying: 388/1024 [MB] (25 MBps) [2024-11-19T08:42:18.372Z] Copying: 413/1024 [MB] (25 MBps) [2024-11-19T08:42:19.306Z] Copying: 439/1024 [MB] (26 MBps) [2024-11-19T08:42:20.242Z] Copying: 464/1024 [MB] (25 MBps) [2024-11-19T08:42:21.178Z] Copying: 489/1024 [MB] (24 MBps) [2024-11-19T08:42:22.112Z] Copying: 513/1024 [MB] (24 MBps) [2024-11-19T08:42:23.488Z] Copying: 538/1024 [MB] (25 MBps) [2024-11-19T08:42:24.422Z] Copying: 563/1024 [MB] (24 MBps) [2024-11-19T08:42:25.357Z] Copying: 589/1024 [MB] (25 MBps) [2024-11-19T08:42:26.294Z] Copying: 615/1024 [MB] (25 MBps) [2024-11-19T08:42:27.232Z] Copying: 640/1024 [MB] (25 MBps) [2024-11-19T08:42:28.221Z] Copying: 665/1024 [MB] (25 MBps) [2024-11-19T08:42:29.157Z] Copying: 690/1024 [MB] (25 MBps) [2024-11-19T08:42:30.533Z] Copying: 715/1024 [MB] (25 MBps) [2024-11-19T08:42:31.468Z] Copying: 740/1024 [MB] (24 MBps) [2024-11-19T08:42:32.404Z] Copying: 766/1024 [MB] (26 MBps) [2024-11-19T08:42:33.340Z] Copying: 791/1024 [MB] (24 MBps) [2024-11-19T08:42:34.274Z] Copying: 816/1024 [MB] (24 MBps) [2024-11-19T08:42:35.209Z] Copying: 841/1024 [MB] (25 MBps) [2024-11-19T08:42:36.144Z] Copying: 866/1024 [MB] (24 MBps) [2024-11-19T08:42:37.520Z] Copying: 890/1024 [MB] (24 MBps) [2024-11-19T08:42:38.454Z] Copying: 914/1024 [MB] (24 MBps) [2024-11-19T08:42:39.396Z] Copying: 940/1024 [MB] (25 MBps) [2024-11-19T08:42:40.340Z] Copying: 965/1024 [MB] (25 MBps) [2024-11-19T08:42:41.275Z] Copying: 989/1024 [MB] (23 MBps) [2024-11-19T08:42:41.534Z] 
Copying: 1014/1024 [MB] (24 MBps) [2024-11-19T08:42:41.794Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-19 08:42:41.578269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.498 [2024-11-19 08:42:41.578351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:02.498 [2024-11-19 08:42:41.578383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:02.498 [2024-11-19 08:42:41.578409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.498 [2024-11-19 08:42:41.578440] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:02.498 [2024-11-19 08:42:41.581798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.498 [2024-11-19 08:42:41.581839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:02.498 [2024-11-19 08:42:41.581861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.335 ms 00:24:02.498 [2024-11-19 08:42:41.581872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.498 [2024-11-19 08:42:41.582114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.498 [2024-11-19 08:42:41.582143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:02.498 [2024-11-19 08:42:41.582156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:24:02.498 [2024-11-19 08:42:41.582166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.498 [2024-11-19 08:42:41.585471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.498 [2024-11-19 08:42:41.585505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:02.498 [2024-11-19 08:42:41.585534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.286 ms 00:24:02.498 [2024-11-19 08:42:41.585546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.498 [2024-11-19 08:42:41.591858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.498 [2024-11-19 08:42:41.591892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:02.498 [2024-11-19 08:42:41.591921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.283 ms 00:24:02.498 [2024-11-19 08:42:41.591931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.498 [2024-11-19 08:42:41.621450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.498 [2024-11-19 08:42:41.621491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:02.498 [2024-11-19 08:42:41.621523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.448 ms 00:24:02.498 [2024-11-19 08:42:41.621534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.498 [2024-11-19 08:42:41.638602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.498 [2024-11-19 08:42:41.638668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:02.498 [2024-11-19 08:42:41.638701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.916 ms 00:24:02.498 [2024-11-19 08:42:41.638712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.498 [2024-11-19 08:42:41.639002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.498 [2024-11-19 08:42:41.639042] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:02.498 [2024-11-19 08:42:41.639056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:24:02.498 [2024-11-19 08:42:41.639067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.498 [2024-11-19 08:42:41.668555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.498 [2024-11-19 08:42:41.668597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:02.498 [2024-11-19 08:42:41.668654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.468 ms 00:24:02.498 [2024-11-19 08:42:41.668666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.498 [2024-11-19 08:42:41.698333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.498 [2024-11-19 08:42:41.698384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:02.498 [2024-11-19 08:42:41.698421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.577 ms 00:24:02.498 [2024-11-19 08:42:41.698431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.498 [2024-11-19 08:42:41.726641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.498 [2024-11-19 08:42:41.726680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:02.498 [2024-11-19 08:42:41.726711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.127 ms 00:24:02.498 [2024-11-19 08:42:41.726721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.498 [2024-11-19 08:42:41.754615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.498 [2024-11-19 08:42:41.754663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:02.498 [2024-11-19 08:42:41.754680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.800 ms 00:24:02.498 [2024-11-19 08:42:41.754690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.499 [2024-11-19 08:42:41.754775] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:02.499 [2024-11-19 08:42:41.754804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.754825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.754836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.754846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.754860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.754878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.754914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.754928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.754940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.754950] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.754961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.754972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.754982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.754996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 
[2024-11-19 08:42:41.755291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:24:02.499 [2024-11-19 08:42:41.755714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:02.499 [2024-11-19 08:42:41.755813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.755834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.755854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.755872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.755889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.755908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.755921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.755932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.755943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.755954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.755965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.755976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.755987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.755998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:02.500 [2024-11-19 08:42:41.756268] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:02.500 [2024-11-19 08:42:41.756285] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 940fa22b-a2a5-4996-9f72-ddb7245a8d43 00:24:02.500 [2024-11-19 08:42:41.756296] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:02.500 [2024-11-19 08:42:41.756305] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:02.500 [2024-11-19 08:42:41.756314] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:02.500 [2024-11-19 08:42:41.756324] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:02.500 [2024-11-19 08:42:41.756333] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:02.500 [2024-11-19 08:42:41.756343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:02.500 [2024-11-19 08:42:41.756370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:02.500 [2024-11-19 08:42:41.756380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:02.500 [2024-11-19 08:42:41.756388] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:24:02.500 [2024-11-19 08:42:41.756399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.500 [2024-11-19 08:42:41.756409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:02.500 [2024-11-19 08:42:41.756420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.628 ms 00:24:02.500 [2024-11-19 08:42:41.756430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.500 [2024-11-19 08:42:41.772854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.500 [2024-11-19 08:42:41.772899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:02.500 [2024-11-19 08:42:41.772915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.379 ms 00:24:02.500 [2024-11-19 08:42:41.772926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.500 [2024-11-19 08:42:41.773389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.500 [2024-11-19 08:42:41.773426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:02.500 [2024-11-19 08:42:41.773438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:24:02.500 [2024-11-19 08:42:41.773457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.759 [2024-11-19 08:42:41.816669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.759 [2024-11-19 08:42:41.816730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:02.759 [2024-11-19 08:42:41.816763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.759 [2024-11-19 08:42:41.816775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.759 [2024-11-19 08:42:41.816847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.759 [2024-11-19 08:42:41.816878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:02.759 [2024-11-19 08:42:41.816889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.759 [2024-11-19 08:42:41.816907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.759 [2024-11-19 08:42:41.817031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.759 [2024-11-19 08:42:41.817060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:02.759 [2024-11-19 08:42:41.817073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.759 [2024-11-19 08:42:41.817083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.759 [2024-11-19 08:42:41.817104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.759 [2024-11-19 08:42:41.817117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:02.759 [2024-11-19 08:42:41.817128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.759 [2024-11-19 08:42:41.817137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.759 [2024-11-19 08:42:41.917080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.759 [2024-11-19 08:42:41.917141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:02.759 [2024-11-19 08:42:41.917175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:24:02.759 [2024-11-19 08:42:41.917186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.759 [2024-11-19 08:42:42.002996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.759 [2024-11-19 08:42:42.003051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:02.759 [2024-11-19 08:42:42.003084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.759 [2024-11-19 08:42:42.003096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.759 [2024-11-19 08:42:42.003238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.759 [2024-11-19 08:42:42.003255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:02.759 [2024-11-19 08:42:42.003267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.759 [2024-11-19 08:42:42.003278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.759 [2024-11-19 08:42:42.003321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.759 [2024-11-19 08:42:42.003335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:02.759 [2024-11-19 08:42:42.003346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.759 [2024-11-19 08:42:42.003356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.759 [2024-11-19 08:42:42.003478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.759 [2024-11-19 08:42:42.003508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:02.759 [2024-11-19 08:42:42.003521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.759 [2024-11-19 08:42:42.003532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.759 [2024-11-19 08:42:42.003615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.759 [2024-11-19 08:42:42.003654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:02.759 [2024-11-19 08:42:42.003669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.759 [2024-11-19 08:42:42.003679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.759 [2024-11-19 08:42:42.003725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.759 [2024-11-19 08:42:42.003748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:02.759 [2024-11-19 08:42:42.003759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.759 [2024-11-19 08:42:42.003770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.759 [2024-11-19 08:42:42.003822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.759 [2024-11-19 08:42:42.003847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:02.759 [2024-11-19 08:42:42.003860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.759 [2024-11-19 08:42:42.003870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.759 [2024-11-19 08:42:42.004035] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 425.735 ms, result 0 00:24:03.696 00:24:03.696 00:24:03.696 08:42:42 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:06.224 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:06.224 08:42:45 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:06.224 [2024-11-19 08:42:45.119495] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:24:06.224 [2024-11-19 08:42:45.119722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77725 ] 00:24:06.224 [2024-11-19 08:42:45.312566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.224 [2024-11-19 08:42:45.431935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.482 [2024-11-19 08:42:45.726189] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:06.482 [2024-11-19 08:42:45.726287] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:06.742 [2024-11-19 08:42:45.884590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.742 [2024-11-19 08:42:45.884673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:06.742 [2024-11-19 08:42:45.884717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:06.742 [2024-11-19 08:42:45.884729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.742 [2024-11-19 08:42:45.884793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.742 [2024-11-19 08:42:45.884812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:06.742 [2024-11-19 08:42:45.884828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:06.742 [2024-11-19 08:42:45.884839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.742 [2024-11-19 08:42:45.884868] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:06.742 [2024-11-19 08:42:45.885871] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:06.742 [2024-11-19 08:42:45.885924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.742 [2024-11-19 08:42:45.885938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:06.742 [2024-11-19 08:42:45.885951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:24:06.742 [2024-11-19 08:42:45.885963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.742 [2024-11-19 08:42:45.887140] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:06.742 [2024-11-19 08:42:45.902213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.742 [2024-11-19 08:42:45.902285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:06.742 [2024-11-19 08:42:45.902318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.075 ms 00:24:06.742 [2024-11-19 08:42:45.902330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.742 [2024-11-19 08:42:45.902403] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:24:06.742 [2024-11-19 08:42:45.902422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:06.742 [2024-11-19 08:42:45.902435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:06.742 [2024-11-19 08:42:45.902446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.742 [2024-11-19 08:42:45.906739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.742 [2024-11-19 08:42:45.906794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:06.742 [2024-11-19 08:42:45.906824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.170 ms 00:24:06.742 [2024-11-19 08:42:45.906835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.742 [2024-11-19 08:42:45.906930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.742 [2024-11-19 08:42:45.906949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:06.742 [2024-11-19 08:42:45.906961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:06.742 [2024-11-19 08:42:45.906971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.742 [2024-11-19 08:42:45.907025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.742 [2024-11-19 08:42:45.907058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:06.742 [2024-11-19 08:42:45.907087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:06.742 [2024-11-19 08:42:45.907098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.742 [2024-11-19 08:42:45.907131] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:06.742 [2024-11-19 08:42:45.911174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.742 [2024-11-19 08:42:45.911225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:06.742 [2024-11-19 08:42:45.911271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.052 ms 00:24:06.742 [2024-11-19 08:42:45.911289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.742 [2024-11-19 08:42:45.911333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.742 [2024-11-19 08:42:45.911350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:06.742 [2024-11-19 08:42:45.911362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:06.742 [2024-11-19 08:42:45.911373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.742 [2024-11-19 08:42:45.911418] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:06.742 [2024-11-19 08:42:45.911448] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:06.742 [2024-11-19 08:42:45.911507] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:06.742 [2024-11-19 08:42:45.911531] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:06.742 [2024-11-19 08:42:45.911671] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:06.742 [2024-11-19 
08:42:45.911692] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:06.742 [2024-11-19 08:42:45.911707] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:06.742 [2024-11-19 08:42:45.911722] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:06.742 [2024-11-19 08:42:45.911736] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:06.742 [2024-11-19 08:42:45.911749] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:06.742 [2024-11-19 08:42:45.911760] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:06.742 [2024-11-19 08:42:45.911771] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:06.742 [2024-11-19 08:42:45.911782] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:06.742 [2024-11-19 08:42:45.911800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.742 [2024-11-19 08:42:45.911812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:06.742 [2024-11-19 08:42:45.911825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:24:06.742 [2024-11-19 08:42:45.911836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.742 [2024-11-19 08:42:45.911931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.742 [2024-11-19 08:42:45.911946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:06.742 [2024-11-19 08:42:45.911973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:06.742 [2024-11-19 08:42:45.911984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.742 [2024-11-19 08:42:45.912104] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:06.742 [2024-11-19 08:42:45.912139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:06.742 [2024-11-19 08:42:45.912153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:06.743 [2024-11-19 08:42:45.912165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:06.743 [2024-11-19 08:42:45.912177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:06.743 [2024-11-19 08:42:45.912188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:06.743 [2024-11-19 08:42:45.912200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:06.743 [2024-11-19 08:42:45.912210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:06.743 [2024-11-19 08:42:45.912221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:06.743 [2024-11-19 08:42:45.912231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:06.743 [2024-11-19 08:42:45.912242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:06.743 [2024-11-19 08:42:45.912252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:06.743 [2024-11-19 08:42:45.912262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:06.743 [2024-11-19 08:42:45.912273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:06.743 [2024-11-19 08:42:45.912283] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:06.743 [2024-11-19 08:42:45.912305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:06.743 [2024-11-19 08:42:45.912317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:06.743 [2024-11-19 08:42:45.912327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:06.743 [2024-11-19 08:42:45.912338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:06.743 [2024-11-19 08:42:45.912348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:06.743 [2024-11-19 08:42:45.912359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:06.743 [2024-11-19 08:42:45.912370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:06.743 [2024-11-19 08:42:45.912380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:06.743 [2024-11-19 08:42:45.912390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:06.743 [2024-11-19 08:42:45.912400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:06.743 [2024-11-19 08:42:45.912411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:06.743 [2024-11-19 08:42:45.912421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:06.743 [2024-11-19 08:42:45.912431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:06.743 [2024-11-19 08:42:45.912442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:06.743 [2024-11-19 08:42:45.912452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:06.743 [2024-11-19 08:42:45.912462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:06.743 [2024-11-19 08:42:45.912473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:06.743 [2024-11-19 08:42:45.912484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:06.743 [2024-11-19 08:42:45.912494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:06.743 [2024-11-19 08:42:45.912505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:06.743 [2024-11-19 08:42:45.912515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:06.743 [2024-11-19 08:42:45.912526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:06.743 [2024-11-19 08:42:45.912537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:06.743 [2024-11-19 08:42:45.912548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:06.743 [2024-11-19 08:42:45.912558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:06.743 [2024-11-19 08:42:45.912569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:06.743 [2024-11-19 08:42:45.912579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:06.743 [2024-11-19 08:42:45.912590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:06.743 [2024-11-19 08:42:45.912600] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:06.743 [2024-11-19 08:42:45.912650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:06.743 [2024-11-19 08:42:45.912663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:24:06.743 [2024-11-19 08:42:45.912675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:06.743 [2024-11-19 08:42:45.912688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:06.743 [2024-11-19 08:42:45.912699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:06.743 [2024-11-19 08:42:45.912710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:06.743 [2024-11-19 08:42:45.912721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:06.743 [2024-11-19 08:42:45.912732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:06.743 [2024-11-19 08:42:45.912743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:06.743 [2024-11-19 08:42:45.912756] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:06.743 [2024-11-19 08:42:45.912770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:06.743 [2024-11-19 08:42:45.912783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:06.743 [2024-11-19 08:42:45.912794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:06.743 [2024-11-19 08:42:45.912806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:06.743 [2024-11-19 08:42:45.912818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:06.743 [2024-11-19 08:42:45.912829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:06.743 [2024-11-19 08:42:45.912840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:06.743 [2024-11-19 08:42:45.912851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:06.743 [2024-11-19 08:42:45.912863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:06.743 [2024-11-19 08:42:45.912874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:06.743 [2024-11-19 08:42:45.912886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:06.743 [2024-11-19 08:42:45.912898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:06.743 [2024-11-19 08:42:45.912909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:06.743 [2024-11-19 08:42:45.912921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:06.743 [2024-11-19 08:42:45.912933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
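The `ftl_superblock_v5_md_layout_dump` records above describe each metadata region as hex block offsets and sizes (`blk_offs`/`blk_sz`). Those hex counts line up with the MiB figures in the region dump earlier if one assumes the 4 KiB FTL block size the numbers imply; here is a minimal sketch to check that, where `blk_to_mib` is a hypothetical helper, not anything in the SPDK test scripts:

```bash
# Hypothetical helper (not part of restore.sh): convert a hex block count
# from the superblock layout dump to MiB, assuming 4096-byte FTL blocks.
blk_to_mib() { echo "scale=2; $((16#${1#0x})) * 4096 / 1048576" | bc; }

blk_to_mib 0x5000   # type 0x2 (l2p)      -> 80.00, matches "Region l2p ... 80.00 MiB"
blk_to_mib 0x800    # types 0xa-0xd (p2l) -> 8.00,  matches "Region p2l0 ... 8.00 MiB"
blk_to_mib 0x20     # type 0x0 (sb)       -> .12,   matches "Region sb ... 0.12 MiB"
```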
00:24:06.743 [2024-11-19 08:42:45.912946] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:06.743 [2024-11-19 08:42:45.912964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:06.743 [2024-11-19 08:42:45.912977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:06.743 [2024-11-19 08:42:45.913004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:06.743 [2024-11-19 08:42:45.913023] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:06.743 [2024-11-19 08:42:45.913042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:06.743 [2024-11-19 08:42:45.913060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.743 [2024-11-19 08:42:45.913072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:06.743 [2024-11-19 08:42:45.913084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.024 ms 00:24:06.743 [2024-11-19 08:42:45.913111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.743 [2024-11-19 08:42:45.944057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.743 [2024-11-19 08:42:45.944129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:06.743 [2024-11-19 08:42:45.944148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.857 ms 00:24:06.743 [2024-11-19 08:42:45.944159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.743 [2024-11-19 08:42:45.944271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.743 [2024-11-19 08:42:45.944286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:06.744 [2024-11-19 08:42:45.944299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:06.744 [2024-11-19 08:42:45.944310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.744 [2024-11-19 08:42:45.987138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.744 [2024-11-19 08:42:45.987211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:06.744 [2024-11-19 08:42:45.987261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.741 ms 00:24:06.744 [2024-11-19 08:42:45.987273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.744 [2024-11-19 08:42:45.987343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.744 [2024-11-19 08:42:45.987360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:06.744 [2024-11-19 08:42:45.987373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:06.744 [2024-11-19 08:42:45.987391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.744 [2024-11-19 08:42:45.987848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.744 [2024-11-19 08:42:45.987879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:06.744 [2024-11-19 
08:42:45.987894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:24:06.744 [2024-11-19 08:42:45.987906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.744 [2024-11-19 08:42:45.988076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.744 [2024-11-19 08:42:45.988101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:06.744 [2024-11-19 08:42:45.988115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:24:06.744 [2024-11-19 08:42:45.988133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.744 [2024-11-19 08:42:46.004010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.744 [2024-11-19 08:42:46.004070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:06.744 [2024-11-19 08:42:46.004091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.848 ms 00:24:06.744 [2024-11-19 08:42:46.004103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.744 [2024-11-19 08:42:46.020151] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:06.744 [2024-11-19 08:42:46.020209] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:06.744 [2024-11-19 08:42:46.020246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.744 [2024-11-19 08:42:46.020259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:06.744 [2024-11-19 08:42:46.020273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.008 ms 00:24:06.744 [2024-11-19 08:42:46.020285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.002 [2024-11-19 08:42:46.050473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.002 [2024-11-19 08:42:46.050523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:07.002 [2024-11-19 08:42:46.050557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.139 ms 00:24:07.003 [2024-11-19 08:42:46.050569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.003 [2024-11-19 08:42:46.066206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.003 [2024-11-19 08:42:46.066260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:07.003 [2024-11-19 08:42:46.066307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.551 ms 00:24:07.003 [2024-11-19 08:42:46.066318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.003 [2024-11-19 08:42:46.080660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.003 [2024-11-19 08:42:46.080723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:07.003 [2024-11-19 08:42:46.080755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.298 ms 00:24:07.003 [2024-11-19 08:42:46.080766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.003 [2024-11-19 08:42:46.081560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.003 [2024-11-19 08:42:46.081606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:07.003 [2024-11-19 08:42:46.082434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.685 ms 00:24:07.003 [2024-11-19 08:42:46.082490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.003 [2024-11-19 08:42:46.147539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.003 [2024-11-19 08:42:46.147656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:07.003 [2024-11-19 08:42:46.147699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.017 ms 00:24:07.003 [2024-11-19 08:42:46.147711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.003 [2024-11-19 08:42:46.160092] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:07.003 [2024-11-19 08:42:46.162546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.003 [2024-11-19 08:42:46.162595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:07.003 [2024-11-19 08:42:46.162660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.764 ms 00:24:07.003 [2024-11-19 08:42:46.162676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.003 [2024-11-19 08:42:46.162787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.003 [2024-11-19 08:42:46.162808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:07.003 [2024-11-19 08:42:46.162822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:07.003 [2024-11-19 08:42:46.162838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.003 [2024-11-19 08:42:46.162932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.003 [2024-11-19 08:42:46.162961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:07.003 [2024-11-19 08:42:46.162976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:07.003 [2024-11-19 08:42:46.162988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.003 [2024-11-19 08:42:46.163036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.003 [2024-11-19 08:42:46.163052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:07.003 [2024-11-19 08:42:46.163073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:07.003 [2024-11-19 08:42:46.163106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.003 [2024-11-19 08:42:46.163168] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:07.003 [2024-11-19 08:42:46.163195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.003 [2024-11-19 08:42:46.163207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:07.003 [2024-11-19 08:42:46.163220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:07.003 [2024-11-19 08:42:46.163231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.003 [2024-11-19 08:42:46.194458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.003 [2024-11-19 08:42:46.194514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:07.003 [2024-11-19 08:42:46.194547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.197 ms 00:24:07.003 [2024-11-19 08:42:46.194564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:07.003 [2024-11-19 08:42:46.194692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.003 [2024-11-19 08:42:46.194712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:07.003 [2024-11-19 08:42:46.194727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:24:07.003 [2024-11-19 08:42:46.194738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.003 [2024-11-19 08:42:46.195941] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 310.782 ms, result 0 00:24:08.003  [2024-11-19T08:42:48.233Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-19T08:42:49.608Z] Copying: 52/1024 [MB] (25 MBps) [2024-11-19T08:42:50.541Z] Copying: 77/1024 [MB] (25 MBps) [2024-11-19T08:42:51.476Z] Copying: 103/1024 [MB] (25 MBps) [2024-11-19T08:42:52.409Z] Copying: 128/1024 [MB] (24 MBps) [2024-11-19T08:42:53.343Z] Copying: 153/1024 [MB] (24 MBps) [2024-11-19T08:42:54.277Z] Copying: 179/1024 [MB] (26 MBps) [2024-11-19T08:42:55.648Z] Copying: 204/1024 [MB] (24 MBps) [2024-11-19T08:42:56.214Z] Copying: 228/1024 [MB] (24 MBps) [2024-11-19T08:42:57.588Z] Copying: 254/1024 [MB] (26 MBps) [2024-11-19T08:42:58.522Z] Copying: 280/1024 [MB] (25 MBps) [2024-11-19T08:42:59.454Z] Copying: 305/1024 [MB] (25 MBps) [2024-11-19T08:43:00.413Z] Copying: 330/1024 [MB] (24 MBps) [2024-11-19T08:43:01.349Z] Copying: 356/1024 [MB] (25 MBps) [2024-11-19T08:43:02.284Z] Copying: 381/1024 [MB] (25 MBps) [2024-11-19T08:43:03.219Z] Copying: 406/1024 [MB] (25 MBps) [2024-11-19T08:43:04.595Z] Copying: 431/1024 [MB] (24 MBps) [2024-11-19T08:43:05.531Z] Copying: 455/1024 [MB] (24 MBps) [2024-11-19T08:43:06.467Z] Copying: 481/1024 [MB] (25 MBps) [2024-11-19T08:43:07.403Z] Copying: 507/1024 [MB] (26 MBps) [2024-11-19T08:43:08.339Z] Copying: 532/1024 [MB] (24 MBps) [2024-11-19T08:43:09.274Z] Copying: 557/1024 [MB] (25 MBps) [2024-11-19T08:43:10.650Z] Copying: 582/1024 [MB] (25 MBps) [2024-11-19T08:43:11.216Z] Copying: 608/1024 [MB] (25 MBps) [2024-11-19T08:43:12.589Z] Copying: 633/1024 [MB] (25 MBps) [2024-11-19T08:43:13.522Z] Copying: 659/1024 [MB] (25 MBps) [2024-11-19T08:43:14.456Z] Copying: 684/1024 [MB] (24 MBps) [2024-11-19T08:43:15.391Z] Copying: 709/1024 [MB] (25 MBps) [2024-11-19T08:43:16.328Z] Copying: 734/1024 [MB] (25 MBps) [2024-11-19T08:43:17.263Z] Copying: 759/1024 [MB] (24 MBps) [2024-11-19T08:43:18.639Z] Copying: 784/1024 [MB] (24 MBps) [2024-11-19T08:43:19.573Z] Copying: 809/1024 [MB] (25 MBps) [2024-11-19T08:43:20.506Z] Copying: 835/1024 [MB] (25 MBps) [2024-11-19T08:43:21.475Z] Copying: 860/1024 [MB] (25 MBps) [2024-11-19T08:43:22.410Z] Copying: 886/1024 [MB] (25 MBps) [2024-11-19T08:43:23.345Z] Copying: 912/1024 [MB] (26 MBps) [2024-11-19T08:43:24.277Z] Copying: 937/1024 [MB] (24 MBps) [2024-11-19T08:43:25.212Z] Copying: 962/1024 [MB] (25 MBps) [2024-11-19T08:43:26.587Z] Copying: 987/1024 [MB] (24 MBps) [2024-11-19T08:43:27.522Z] Copying: 1012/1024 [MB] (24 MBps) [2024-11-19T08:43:27.780Z] Copying: 1023/1024 [MB] (10 MBps) [2024-11-19T08:43:27.780Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-19 08:43:27.775705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.484 [2024-11-19 08:43:27.775786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:48.485 [2024-11-19 08:43:27.775824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:48.485 [2024-11-19 
08:43:27.775851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.485 [2024-11-19 08:43:27.777746] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:48.743 [2024-11-19 08:43:27.783410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.743 [2024-11-19 08:43:27.783460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:48.743 [2024-11-19 08:43:27.783475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.593 ms 00:24:48.743 [2024-11-19 08:43:27.783486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.743 [2024-11-19 08:43:27.796300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.744 [2024-11-19 08:43:27.796354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:48.744 [2024-11-19 08:43:27.796371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.732 ms 00:24:48.744 [2024-11-19 08:43:27.796382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.744 [2024-11-19 08:43:27.817604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.744 [2024-11-19 08:43:27.817681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:48.744 [2024-11-19 08:43:27.817698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.193 ms 00:24:48.744 [2024-11-19 08:43:27.817710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.744 [2024-11-19 08:43:27.823644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.744 [2024-11-19 08:43:27.823692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:48.744 [2024-11-19 08:43:27.823707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.889 ms 00:24:48.744 [2024-11-19 08:43:27.823718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.744 [2024-11-19 08:43:27.851290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.744 [2024-11-19 08:43:27.851339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:48.744 [2024-11-19 08:43:27.851354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.463 ms 00:24:48.744 [2024-11-19 08:43:27.851364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.744 [2024-11-19 08:43:27.867452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.744 [2024-11-19 08:43:27.867507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:48.744 [2024-11-19 08:43:27.867521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.048 ms 00:24:48.744 [2024-11-19 08:43:27.867532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.744 [2024-11-19 08:43:27.970551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.744 [2024-11-19 08:43:27.970690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:48.744 [2024-11-19 08:43:27.970712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.973 ms 00:24:48.744 [2024-11-19 08:43:27.970724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.744 [2024-11-19 08:43:28.002141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.744 [2024-11-19 08:43:28.002192] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:48.744 [2024-11-19 08:43:28.002223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.394 ms 00:24:48.744 [2024-11-19 08:43:28.002235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.744 [2024-11-19 08:43:28.030993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.744 [2024-11-19 08:43:28.031052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:48.744 [2024-11-19 08:43:28.031067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.716 ms 00:24:48.744 [2024-11-19 08:43:28.031078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.003 [2024-11-19 08:43:28.060549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.003 [2024-11-19 08:43:28.060599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:49.003 [2024-11-19 08:43:28.060614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.431 ms 00:24:49.003 [2024-11-19 08:43:28.060673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.003 [2024-11-19 08:43:28.091651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.003 [2024-11-19 08:43:28.091702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:49.003 [2024-11-19 08:43:28.091718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.893 ms 00:24:49.003 [2024-11-19 08:43:28.091730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.003 [2024-11-19 08:43:28.091775] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:49.003 [2024-11-19 08:43:28.091799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118272 / 261120 wr_cnt: 1 state: open 00:24:49.003 [2024-11-19 08:43:28.091814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.091827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.091839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.091852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.091864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.091876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.091888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.091900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.091913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.091925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.091937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.091978] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.091990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:49.003 [2024-11-19 08:43:28.092238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 
08:43:28.092281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 
00:24:49.004 [2024-11-19 08:43:28.092547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 
wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.092975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.093001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.093027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.093039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.093073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.093085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.093095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:49.004 [2024-11-19 08:43:28.093114] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:49.004 [2024-11-19 08:43:28.093125] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 940fa22b-a2a5-4996-9f72-ddb7245a8d43 00:24:49.004 [2024-11-19 08:43:28.093135] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118272 00:24:49.004 [2024-11-19 08:43:28.093145] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 119232 00:24:49.004 [2024-11-19 08:43:28.093155] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118272 00:24:49.004 [2024-11-19 08:43:28.093166] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:24:49.004 [2024-11-19 08:43:28.093177] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:49.004 [2024-11-19 08:43:28.093193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:49.004 [2024-11-19 08:43:28.093230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:49.004 [2024-11-19 08:43:28.093240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:49.004 [2024-11-19 08:43:28.093250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:49.004 [2024-11-19 08:43:28.093261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.004 [2024-11-19 08:43:28.093272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:49.004 [2024-11-19 08:43:28.093283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms 00:24:49.004 [2024-11-19 08:43:28.093326] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.004 [2024-11-19 08:43:28.110011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.004 [2024-11-19 08:43:28.110047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:49.004 [2024-11-19 08:43:28.110063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.620 ms 00:24:49.004 [2024-11-19 08:43:28.110081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.004 [2024-11-19 08:43:28.110537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.004 [2024-11-19 08:43:28.110554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:49.004 [2024-11-19 08:43:28.110567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:24:49.004 [2024-11-19 08:43:28.110578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.004 [2024-11-19 08:43:28.152509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.004 [2024-11-19 08:43:28.152571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:49.005 [2024-11-19 08:43:28.152607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.005 [2024-11-19 08:43:28.152619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.005 [2024-11-19 08:43:28.152701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.005 [2024-11-19 08:43:28.152718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:49.005 [2024-11-19 08:43:28.152730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.005 [2024-11-19 08:43:28.152741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.005 [2024-11-19 08:43:28.152890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.005 [2024-11-19 08:43:28.152909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:49.005 [2024-11-19 08:43:28.152921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.005 [2024-11-19 08:43:28.152938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.005 [2024-11-19 08:43:28.152961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.005 [2024-11-19 08:43:28.152976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:49.005 [2024-11-19 08:43:28.152988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.005 [2024-11-19 08:43:28.152998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.005 [2024-11-19 08:43:28.244731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.005 [2024-11-19 08:43:28.244801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:49.005 [2024-11-19 08:43:28.244824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.005 [2024-11-19 08:43:28.244834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.264 [2024-11-19 08:43:28.321738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.264 [2024-11-19 08:43:28.321806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:49.264 [2024-11-19 08:43:28.321824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:24:49.264 [2024-11-19 08:43:28.321835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.264 [2024-11-19 08:43:28.321913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.264 [2024-11-19 08:43:28.321929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:49.264 [2024-11-19 08:43:28.321940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.264 [2024-11-19 08:43:28.321952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.264 [2024-11-19 08:43:28.322029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.264 [2024-11-19 08:43:28.322077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:49.264 [2024-11-19 08:43:28.322089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.264 [2024-11-19 08:43:28.322101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.264 [2024-11-19 08:43:28.322219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.264 [2024-11-19 08:43:28.322239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:49.264 [2024-11-19 08:43:28.322251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.264 [2024-11-19 08:43:28.322263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.264 [2024-11-19 08:43:28.322318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.264 [2024-11-19 08:43:28.322335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:49.264 [2024-11-19 08:43:28.322348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.264 [2024-11-19 08:43:28.322359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.264 [2024-11-19 08:43:28.322401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.264 [2024-11-19 08:43:28.322416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:49.264 [2024-11-19 08:43:28.322428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.264 [2024-11-19 08:43:28.322439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.264 [2024-11-19 08:43:28.322539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.264 [2024-11-19 08:43:28.322555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:49.264 [2024-11-19 08:43:28.322568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.264 [2024-11-19 08:43:28.322579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.264 [2024-11-19 08:43:28.322738] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 550.679 ms, result 0 00:24:50.643 00:24:50.643 00:24:50.643 08:43:29 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:50.643 [2024-11-19 08:43:29.717987] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
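Taken together with the `restore.sh@79` invocation near the top of this excerpt, the `restore.sh@80` command here completes a write/read-back round trip through the `ftl0` bdev: data goes in with `--ob=ftl0 --seek=...`, the FTL device is shut down and restarted (the restore under test), the same range comes back out with `--ib=ftl0 --skip=... --count=...`, and the result is compared against a stored md5 (the `testfile: OK` line above). A rough sketch of that pattern, assuming the paths and offsets shown in the log; this is not the literal script body:

```bash
# Rough sketch of the restore round trip (not the literal restore.sh):
SPDK=/home/vagrant/spdk_repo/spdk
DD=$SPDK/build/bin/spdk_dd
CFG=$SPDK/test/ftl/config/ftl.json
FILE=$SPDK/test/ftl/testfile

md5sum "$FILE" > "$FILE.md5"                             # checksum the source data
$DD --if="$FILE" --ob=ftl0 --json="$CFG" --seek=131072   # write into the FTL bdev
# ... FTL shutdown / startup happens in between ...
$DD --ib=ftl0 --of="$FILE" --json="$CFG" --skip=131072 --count=262144  # read back
md5sum -c "$FILE.md5"                                    # expect "testfile: OK"
```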
00:24:50.643 [2024-11-19 08:43:29.718182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78169 ] 00:24:50.643 [2024-11-19 08:43:29.897848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.903 [2024-11-19 08:43:29.986148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.162 [2024-11-19 08:43:30.273129] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:51.162 [2024-11-19 08:43:30.273231] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:51.162 [2024-11-19 08:43:30.431112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.162 [2024-11-19 08:43:30.431189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:51.162 [2024-11-19 08:43:30.431245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:51.162 [2024-11-19 08:43:30.431256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.162 [2024-11-19 08:43:30.431317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.162 [2024-11-19 08:43:30.431335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:51.162 [2024-11-19 08:43:30.431350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:51.162 [2024-11-19 08:43:30.431361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.162 [2024-11-19 08:43:30.431390] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:51.162 [2024-11-19 08:43:30.432431] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:51.162 [2024-11-19 08:43:30.432469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.162 [2024-11-19 08:43:30.432483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:51.162 [2024-11-19 08:43:30.432495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:24:51.162 [2024-11-19 08:43:30.432505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.162 [2024-11-19 08:43:30.433744] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:51.162 [2024-11-19 08:43:30.449040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.162 [2024-11-19 08:43:30.449095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:51.162 [2024-11-19 08:43:30.449127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.298 ms 00:24:51.162 [2024-11-19 08:43:30.449137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.162 [2024-11-19 08:43:30.449224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.162 [2024-11-19 08:43:30.449243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:51.162 [2024-11-19 08:43:30.449254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:51.162 [2024-11-19 08:43:30.449264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.162 [2024-11-19 08:43:30.453908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:51.162 [2024-11-19 08:43:30.453957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:51.162 [2024-11-19 08:43:30.453974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.522 ms 00:24:51.162 [2024-11-19 08:43:30.453985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.162 [2024-11-19 08:43:30.454098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.162 [2024-11-19 08:43:30.454118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:51.162 [2024-11-19 08:43:30.454131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:51.162 [2024-11-19 08:43:30.454142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.162 [2024-11-19 08:43:30.454201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.162 [2024-11-19 08:43:30.454218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:51.162 [2024-11-19 08:43:30.454230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:51.162 [2024-11-19 08:43:30.454241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.162 [2024-11-19 08:43:30.454273] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:51.423 [2024-11-19 08:43:30.458808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.423 [2024-11-19 08:43:30.458864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:51.423 [2024-11-19 08:43:30.458881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.543 ms 00:24:51.423 [2024-11-19 08:43:30.458897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.423 [2024-11-19 08:43:30.458937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.423 [2024-11-19 08:43:30.458952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:51.423 [2024-11-19 08:43:30.458964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:51.423 [2024-11-19 08:43:30.458975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.423 [2024-11-19 08:43:30.459053] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:51.423 [2024-11-19 08:43:30.459082] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:51.423 [2024-11-19 08:43:30.459173] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:51.423 [2024-11-19 08:43:30.459206] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:51.423 [2024-11-19 08:43:30.459332] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:51.423 [2024-11-19 08:43:30.459347] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:51.423 [2024-11-19 08:43:30.459360] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:51.423 [2024-11-19 08:43:30.459374] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:51.423 [2024-11-19 08:43:30.459387] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:51.423 [2024-11-19 08:43:30.459398] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:51.423 [2024-11-19 08:43:30.459408] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:51.423 [2024-11-19 08:43:30.459418] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:51.423 [2024-11-19 08:43:30.459428] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:51.423 [2024-11-19 08:43:30.459444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.423 [2024-11-19 08:43:30.459456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:51.423 [2024-11-19 08:43:30.459468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:24:51.423 [2024-11-19 08:43:30.459478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.423 [2024-11-19 08:43:30.459655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.423 [2024-11-19 08:43:30.459679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:51.423 [2024-11-19 08:43:30.459691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:24:51.423 [2024-11-19 08:43:30.459702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.423 [2024-11-19 08:43:30.459849] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:51.423 [2024-11-19 08:43:30.459874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:51.423 [2024-11-19 08:43:30.459886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.423 [2024-11-19 08:43:30.459897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.423 [2024-11-19 08:43:30.459908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:51.423 [2024-11-19 08:43:30.459918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:51.423 [2024-11-19 08:43:30.459929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:51.423 [2024-11-19 08:43:30.459939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:51.423 [2024-11-19 08:43:30.459949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:51.423 [2024-11-19 08:43:30.459959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.423 [2024-11-19 08:43:30.459969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:51.423 [2024-11-19 08:43:30.459979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:51.423 [2024-11-19 08:43:30.460003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:51.423 [2024-11-19 08:43:30.460014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:51.423 [2024-11-19 08:43:30.460025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:51.423 [2024-11-19 08:43:30.460044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.423 [2024-11-19 08:43:30.460055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:51.423 [2024-11-19 08:43:30.460064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:51.423 [2024-11-19 08:43:30.460074] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.423 [2024-11-19 08:43:30.460083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:51.423 [2024-11-19 08:43:30.460093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:51.423 [2024-11-19 08:43:30.460102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.423 [2024-11-19 08:43:30.460112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:51.423 [2024-11-19 08:43:30.460121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:51.423 [2024-11-19 08:43:30.460131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.423 [2024-11-19 08:43:30.460140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:51.423 [2024-11-19 08:43:30.460150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:51.423 [2024-11-19 08:43:30.460159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.423 [2024-11-19 08:43:30.460169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:51.423 [2024-11-19 08:43:30.460178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:51.423 [2024-11-19 08:43:30.460188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:51.423 [2024-11-19 08:43:30.460197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:51.423 [2024-11-19 08:43:30.460207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:51.423 [2024-11-19 08:43:30.460216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.423 [2024-11-19 08:43:30.460226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:51.423 [2024-11-19 08:43:30.460235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:51.423 [2024-11-19 08:43:30.460245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:51.423 [2024-11-19 08:43:30.460254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:51.423 [2024-11-19 08:43:30.460264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:51.423 [2024-11-19 08:43:30.460273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.423 [2024-11-19 08:43:30.460283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:51.423 [2024-11-19 08:43:30.460292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:51.423 [2024-11-19 08:43:30.460301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.423 [2024-11-19 08:43:30.460311] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:51.423 [2024-11-19 08:43:30.460321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:51.423 [2024-11-19 08:43:30.460333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:51.423 [2024-11-19 08:43:30.460343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:51.423 [2024-11-19 08:43:30.460354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:51.423 [2024-11-19 08:43:30.460364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:51.423 [2024-11-19 08:43:30.460373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:51.423 
[2024-11-19 08:43:30.460383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:51.423 [2024-11-19 08:43:30.460393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:51.423 [2024-11-19 08:43:30.460402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:51.423 [2024-11-19 08:43:30.460413] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:51.423 [2024-11-19 08:43:30.460426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.423 [2024-11-19 08:43:30.460438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:51.423 [2024-11-19 08:43:30.460449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:51.423 [2024-11-19 08:43:30.460459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:51.423 [2024-11-19 08:43:30.460470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:51.423 [2024-11-19 08:43:30.460480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:51.423 [2024-11-19 08:43:30.460490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:51.424 [2024-11-19 08:43:30.460500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:51.424 [2024-11-19 08:43:30.460511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:51.424 [2024-11-19 08:43:30.460521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:51.424 [2024-11-19 08:43:30.460532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:51.424 [2024-11-19 08:43:30.460544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:51.424 [2024-11-19 08:43:30.460554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:51.424 [2024-11-19 08:43:30.460565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:51.424 [2024-11-19 08:43:30.460575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:51.424 [2024-11-19 08:43:30.460586] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:51.424 [2024-11-19 08:43:30.460602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:51.424 [2024-11-19 08:43:30.460613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:51.424 [2024-11-19 08:43:30.460641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:51.424 [2024-11-19 08:43:30.460654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:51.424 [2024-11-19 08:43:30.460664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:51.424 [2024-11-19 08:43:30.460676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.460687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:51.424 [2024-11-19 08:43:30.460699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:24:51.424 [2024-11-19 08:43:30.460709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.491083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.491154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:51.424 [2024-11-19 08:43:30.491188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.315 ms 00:24:51.424 [2024-11-19 08:43:30.491199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.491309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.491323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:51.424 [2024-11-19 08:43:30.491334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:51.424 [2024-11-19 08:43:30.491344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.540138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.540208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:51.424 [2024-11-19 08:43:30.540241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.708 ms 00:24:51.424 [2024-11-19 08:43:30.540252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.540323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.540338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:51.424 [2024-11-19 08:43:30.540350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:51.424 [2024-11-19 08:43:30.540367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.540820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.540839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:51.424 [2024-11-19 08:43:30.540852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:24:51.424 [2024-11-19 08:43:30.540862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.541025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.541043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:51.424 [2024-11-19 08:43:30.541054] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:24:51.424 [2024-11-19 08:43:30.541071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.556636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.556703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:51.424 [2024-11-19 08:43:30.556739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.539 ms 00:24:51.424 [2024-11-19 08:43:30.556750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.571967] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:51.424 [2024-11-19 08:43:30.572035] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:51.424 [2024-11-19 08:43:30.572068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.572080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:51.424 [2024-11-19 08:43:30.572092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.186 ms 00:24:51.424 [2024-11-19 08:43:30.572101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.600147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.600208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:51.424 [2024-11-19 08:43:30.600241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.001 ms 00:24:51.424 [2024-11-19 08:43:30.600252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.614929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.615007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:51.424 [2024-11-19 08:43:30.615037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.631 ms 00:24:51.424 [2024-11-19 08:43:30.615047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.629163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.629214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:51.424 [2024-11-19 08:43:30.629244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.074 ms 00:24:51.424 [2024-11-19 08:43:30.629253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.630090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.630123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:51.424 [2024-11-19 08:43:30.630153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:24:51.424 [2024-11-19 08:43:30.630167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.693187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.693269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:51.424 [2024-11-19 08:43:30.693310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 62.998 ms 00:24:51.424 [2024-11-19 08:43:30.693320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.704298] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:51.424 [2024-11-19 08:43:30.706472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.706503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:51.424 [2024-11-19 08:43:30.706534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.074 ms 00:24:51.424 [2024-11-19 08:43:30.706545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.706665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.706700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:51.424 [2024-11-19 08:43:30.706713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:51.424 [2024-11-19 08:43:30.706728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.708343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.708377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:51.424 [2024-11-19 08:43:30.708406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms 00:24:51.424 [2024-11-19 08:43:30.708431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.708464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.708493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:51.424 [2024-11-19 08:43:30.708504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:51.424 [2024-11-19 08:43:30.708514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.424 [2024-11-19 08:43:30.708586] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:51.424 [2024-11-19 08:43:30.708605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.424 [2024-11-19 08:43:30.708616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:51.424 [2024-11-19 08:43:30.708627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:51.424 [2024-11-19 08:43:30.708637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.683 [2024-11-19 08:43:30.737396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.683 [2024-11-19 08:43:30.737451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:51.683 [2024-11-19 08:43:30.737482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.732 ms 00:24:51.683 [2024-11-19 08:43:30.737498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.683 [2024-11-19 08:43:30.737576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.683 [2024-11-19 08:43:30.737593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:51.683 [2024-11-19 08:43:30.737605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:51.683 [2024-11-19 08:43:30.737645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
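A quick cross-check of the layout dump above: the sizes are self-consistent if one assumes the FTL's 4 KiB logical block size (an assumption on our part, but the only value these figures fit). Taking the L2P region as the worked example:

    L2P table size = L2P entries * L2P address size
                   = 20971520 * 4 B
                   = 83886080 B = 80 MiB

which matches both "Region l2p ... blocks: 80.00 MiB" and the superblock entry "type:0x2 ... blk_sz:0x5000": 0x5000 = 20480 blocks, and 20480 * 4 KiB = 80 MiB. The 0.12 MiB regions line up the same way (blk_sz:0x20 = 32 blocks * 4 KiB = 128 KiB).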
00:24:51.683 [2024-11-19 08:43:30.738877] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.170 ms, result 0 00:24:53.059  [2024-11-19T08:43:33.291Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-19T08:43:34.225Z] Copying: 46/1024 [MB] (24 MBps) [2024-11-19T08:43:35.161Z] Copying: 70/1024 [MB] (24 MBps) [2024-11-19T08:43:36.097Z] Copying: 94/1024 [MB] (24 MBps) [2024-11-19T08:43:37.034Z] Copying: 119/1024 [MB] (24 MBps) [2024-11-19T08:43:37.970Z] Copying: 143/1024 [MB] (24 MBps) [2024-11-19T08:43:39.351Z] Copying: 168/1024 [MB] (24 MBps) [2024-11-19T08:43:40.294Z] Copying: 192/1024 [MB] (24 MBps) [2024-11-19T08:43:41.229Z] Copying: 216/1024 [MB] (24 MBps) [2024-11-19T08:43:42.164Z] Copying: 241/1024 [MB] (24 MBps) [2024-11-19T08:43:43.100Z] Copying: 265/1024 [MB] (24 MBps) [2024-11-19T08:43:44.036Z] Copying: 289/1024 [MB] (24 MBps) [2024-11-19T08:43:44.971Z] Copying: 314/1024 [MB] (25 MBps) [2024-11-19T08:43:46.346Z] Copying: 339/1024 [MB] (24 MBps) [2024-11-19T08:43:47.280Z] Copying: 364/1024 [MB] (24 MBps) [2024-11-19T08:43:48.216Z] Copying: 388/1024 [MB] (24 MBps) [2024-11-19T08:43:49.152Z] Copying: 412/1024 [MB] (24 MBps) [2024-11-19T08:43:50.092Z] Copying: 437/1024 [MB] (24 MBps) [2024-11-19T08:43:51.026Z] Copying: 461/1024 [MB] (24 MBps) [2024-11-19T08:43:51.960Z] Copying: 485/1024 [MB] (24 MBps) [2024-11-19T08:43:53.337Z] Copying: 509/1024 [MB] (23 MBps) [2024-11-19T08:43:54.272Z] Copying: 534/1024 [MB] (24 MBps) [2024-11-19T08:43:55.208Z] Copying: 559/1024 [MB] (25 MBps) [2024-11-19T08:43:56.145Z] Copying: 583/1024 [MB] (24 MBps) [2024-11-19T08:43:57.079Z] Copying: 609/1024 [MB] (25 MBps) [2024-11-19T08:43:58.014Z] Copying: 633/1024 [MB] (24 MBps) [2024-11-19T08:43:58.951Z] Copying: 659/1024 [MB] (25 MBps) [2024-11-19T08:44:00.328Z] Copying: 681/1024 [MB] (22 MBps) [2024-11-19T08:44:01.262Z] Copying: 704/1024 [MB] (22 MBps) [2024-11-19T08:44:02.196Z] Copying: 728/1024 [MB] (23 MBps) [2024-11-19T08:44:03.132Z] Copying: 753/1024 [MB] (24 MBps) [2024-11-19T08:44:04.069Z] Copying: 779/1024 [MB] (25 MBps) [2024-11-19T08:44:05.004Z] Copying: 805/1024 [MB] (25 MBps) [2024-11-19T08:44:06.377Z] Copying: 830/1024 [MB] (25 MBps) [2024-11-19T08:44:07.313Z] Copying: 856/1024 [MB] (25 MBps) [2024-11-19T08:44:08.286Z] Copying: 882/1024 [MB] (25 MBps) [2024-11-19T08:44:09.254Z] Copying: 908/1024 [MB] (25 MBps) [2024-11-19T08:44:10.188Z] Copying: 934/1024 [MB] (26 MBps) [2024-11-19T08:44:11.123Z] Copying: 959/1024 [MB] (25 MBps) [2024-11-19T08:44:12.059Z] Copying: 985/1024 [MB] (25 MBps) [2024-11-19T08:44:12.625Z] Copying: 1011/1024 [MB] (26 MBps) [2024-11-19T08:44:12.884Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-19 08:44:12.791088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.588 [2024-11-19 08:44:12.791151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:33.588 [2024-11-19 08:44:12.791171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:33.588 [2024-11-19 08:44:12.791182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.588 [2024-11-19 08:44:12.791221] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:33.588 [2024-11-19 08:44:12.794803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.588 [2024-11-19 08:44:12.794838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:33.588 
[2024-11-19 08:44:12.794853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.559 ms 00:25:33.588 [2024-11-19 08:44:12.794864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.588 [2024-11-19 08:44:12.795105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.588 [2024-11-19 08:44:12.795133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:33.588 [2024-11-19 08:44:12.795147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:25:33.588 [2024-11-19 08:44:12.795157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.588 [2024-11-19 08:44:12.800555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.588 [2024-11-19 08:44:12.800598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:33.588 [2024-11-19 08:44:12.800627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.372 ms 00:25:33.588 [2024-11-19 08:44:12.800640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.588 [2024-11-19 08:44:12.808344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.589 [2024-11-19 08:44:12.808411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:33.589 [2024-11-19 08:44:12.808451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.661 ms 00:25:33.589 [2024-11-19 08:44:12.808469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.589 [2024-11-19 08:44:12.840305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.589 [2024-11-19 08:44:12.840353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:33.589 [2024-11-19 08:44:12.840371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.721 ms 00:25:33.589 [2024-11-19 08:44:12.840382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.589 [2024-11-19 08:44:12.858045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.589 [2024-11-19 08:44:12.858122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:33.589 [2024-11-19 08:44:12.858155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.617 ms 00:25:33.589 [2024-11-19 08:44:12.858166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.848 [2024-11-19 08:44:12.984825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.848 [2024-11-19 08:44:12.984899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:33.848 [2024-11-19 08:44:12.984935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 126.593 ms 00:25:33.848 [2024-11-19 08:44:12.984947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.848 [2024-11-19 08:44:13.014468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.848 [2024-11-19 08:44:13.014522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:33.848 [2024-11-19 08:44:13.014552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.485 ms 00:25:33.848 [2024-11-19 08:44:13.014564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.848 [2024-11-19 08:44:13.044136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.848 [2024-11-19 08:44:13.044173] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:33.848 [2024-11-19 08:44:13.044234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.531 ms 00:25:33.848 [2024-11-19 08:44:13.044244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.848 [2024-11-19 08:44:13.075983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.848 [2024-11-19 08:44:13.076054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:33.848 [2024-11-19 08:44:13.076099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.697 ms 00:25:33.848 [2024-11-19 08:44:13.076109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.848 [2024-11-19 08:44:13.105974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.848 [2024-11-19 08:44:13.106013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:33.848 [2024-11-19 08:44:13.106028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.753 ms 00:25:33.848 [2024-11-19 08:44:13.106053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.848 [2024-11-19 08:44:13.106094] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:33.848 [2024-11-19 08:44:13.106116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:33.848 [2024-11-19 08:44:13.106129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:33.848 [2024-11-19 08:44:13.106141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:33.848 [2024-11-19 08:44:13.106152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:33.848 [2024-11-19 08:44:13.106163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:33.848 [2024-11-19 08:44:13.106173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:33.848 [2024-11-19 08:44:13.106184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:33.848 [2024-11-19 08:44:13.106195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 
wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106914] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.106992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107192] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:33.849 [2024-11-19 08:44:13.107249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:33.850 [2024-11-19 08:44:13.107260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:33.850 [2024-11-19 08:44:13.107271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:33.850 [2024-11-19 08:44:13.107283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:33.850 [2024-11-19 08:44:13.107294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:33.850 [2024-11-19 08:44:13.107314] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:33.850 [2024-11-19 08:44:13.107325] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 940fa22b-a2a5-4996-9f72-ddb7245a8d43 00:25:33.850 [2024-11-19 08:44:13.107336] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:33.850 [2024-11-19 08:44:13.107347] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 13760 00:25:33.850 [2024-11-19 08:44:13.107358] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 12800 00:25:33.850 [2024-11-19 08:44:13.107369] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0750 00:25:33.850 [2024-11-19 08:44:13.107380] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:33.850 [2024-11-19 08:44:13.107398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:33.850 [2024-11-19 08:44:13.107409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:33.850 [2024-11-19 08:44:13.107430] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:33.850 [2024-11-19 08:44:13.107440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:33.850 [2024-11-19 08:44:13.107451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.850 [2024-11-19 08:44:13.107461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:33.850 [2024-11-19 08:44:13.107472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.358 ms 00:25:33.850 [2024-11-19 08:44:13.107483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.850 [2024-11-19 08:44:13.123434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.850 [2024-11-19 08:44:13.123485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:33.850 [2024-11-19 08:44:13.123517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.893 ms 00:25:33.850 [2024-11-19 08:44:13.123536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
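The statistics dump above is internally consistent. Write amplification factor (WAF) is simply total device writes over user writes:

    WAF = total writes / user writes = 13760 / 12800 = 1.0750

and "total valid LBAs: 131072" agrees with the band dump, where Band 1 alone carries 131072 valid blocks (131072 / 261120, state: open) and every other band is free.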
00:25:33.850 [2024-11-19 08:44:13.124004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.850 [2024-11-19 08:44:13.124036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:33.850 [2024-11-19 08:44:13.124049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:25:33.850 [2024-11-19 08:44:13.124073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.109 [2024-11-19 08:44:13.164600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.109 [2024-11-19 08:44:13.164687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:34.109 [2024-11-19 08:44:13.164723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.109 [2024-11-19 08:44:13.164734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.109 [2024-11-19 08:44:13.164792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.109 [2024-11-19 08:44:13.164806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:34.109 [2024-11-19 08:44:13.164817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.109 [2024-11-19 08:44:13.164826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.109 [2024-11-19 08:44:13.164933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.109 [2024-11-19 08:44:13.164951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:34.109 [2024-11-19 08:44:13.164964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.109 [2024-11-19 08:44:13.164980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.109 [2024-11-19 08:44:13.165002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.109 [2024-11-19 08:44:13.165015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:34.109 [2024-11-19 08:44:13.165036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.109 [2024-11-19 08:44:13.165046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.109 [2024-11-19 08:44:13.262043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.109 [2024-11-19 08:44:13.262137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:34.109 [2024-11-19 08:44:13.262179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.109 [2024-11-19 08:44:13.262190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.109 [2024-11-19 08:44:13.340208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.109 [2024-11-19 08:44:13.340282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:34.109 [2024-11-19 08:44:13.340316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.109 [2024-11-19 08:44:13.340327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.109 [2024-11-19 08:44:13.340425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.109 [2024-11-19 08:44:13.340442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:34.109 [2024-11-19 08:44:13.340453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.109 [2024-11-19 
08:44:13.340463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.109 [2024-11-19 08:44:13.340512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.109 [2024-11-19 08:44:13.340526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:34.109 [2024-11-19 08:44:13.340536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.109 [2024-11-19 08:44:13.340546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.109 [2024-11-19 08:44:13.340719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.109 [2024-11-19 08:44:13.340740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:34.109 [2024-11-19 08:44:13.340753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.109 [2024-11-19 08:44:13.340763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.109 [2024-11-19 08:44:13.340816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.109 [2024-11-19 08:44:13.340834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:34.109 [2024-11-19 08:44:13.340846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.109 [2024-11-19 08:44:13.340856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.109 [2024-11-19 08:44:13.340900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.109 [2024-11-19 08:44:13.340915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:34.109 [2024-11-19 08:44:13.340926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.109 [2024-11-19 08:44:13.340937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.109 [2024-11-19 08:44:13.340992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:34.109 [2024-11-19 08:44:13.341009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:34.109 [2024-11-19 08:44:13.341021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:34.109 [2024-11-19 08:44:13.341032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.109 [2024-11-19 08:44:13.341171] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 550.081 ms, result 0 00:25:35.044 00:25:35.044 00:25:35.044 08:44:14 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:37.577 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:37.577 08:44:16 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:37.577 08:44:16 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:37.577 08:44:16 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:37.577 08:44:16 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:37.577 08:44:16 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:37.577 Process with pid 76599 is not found 00:25:37.577 Remove shared memory files 00:25:37.577 08:44:16 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76599 00:25:37.577 08:44:16 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 76599 ']' 00:25:37.577 08:44:16 
ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 76599 00:25:37.577 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76599) - No such process 00:25:37.577 08:44:16 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 76599 is not found' 00:25:37.577 08:44:16 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:37.577 08:44:16 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:37.577 08:44:16 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:37.577 08:44:16 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:37.577 08:44:16 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:37.577 08:44:16 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:37.577 08:44:16 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:37.577 ************************************ 00:25:37.577 END TEST ftl_restore 00:25:37.577 ************************************ 00:25:37.577 00:25:37.577 real 3m18.327s 00:25:37.577 user 3m4.508s 00:25:37.577 sys 0m16.380s 00:25:37.577 08:44:16 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:37.577 08:44:16 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:37.577 08:44:16 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:37.577 08:44:16 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:37.577 08:44:16 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:37.577 08:44:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:37.577 ************************************ 00:25:37.577 START TEST ftl_dirty_shutdown 00:25:37.577 ************************************ 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:37.577 * Looking for test storage... 
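Stepping back to the restore check that just passed (restore.sh@82): md5sum -c re-verifies a checksum recorded before the shutdown/restore cycle, which is the point of the ftl_restore test. A minimal sketch of the pattern, assuming the record step happens earlier in restore.sh (paths shortened for illustration):

    md5sum testfile > testfile.md5   # recorded while the FTL device held the data
    # ... FTL shutdown, then restore ...
    md5sum -c testfile.md5           # prints 'testfile: OK' only if the data survived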
00:25:37.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:37.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.577 --rc genhtml_branch_coverage=1 00:25:37.577 --rc genhtml_function_coverage=1 00:25:37.577 --rc genhtml_legend=1 00:25:37.577 --rc geninfo_all_blocks=1 00:25:37.577 --rc geninfo_unexecuted_blocks=1 00:25:37.577 00:25:37.577 ' 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:37.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.577 --rc genhtml_branch_coverage=1 00:25:37.577 --rc genhtml_function_coverage=1 00:25:37.577 --rc genhtml_legend=1 00:25:37.577 --rc geninfo_all_blocks=1 00:25:37.577 --rc geninfo_unexecuted_blocks=1 00:25:37.577 00:25:37.577 ' 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:37.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.577 --rc genhtml_branch_coverage=1 00:25:37.577 --rc genhtml_function_coverage=1 00:25:37.577 --rc genhtml_legend=1 00:25:37.577 --rc geninfo_all_blocks=1 00:25:37.577 --rc geninfo_unexecuted_blocks=1 00:25:37.577 00:25:37.577 ' 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:37.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.577 --rc genhtml_branch_coverage=1 00:25:37.577 --rc genhtml_function_coverage=1 00:25:37.577 --rc genhtml_legend=1 00:25:37.577 --rc geninfo_all_blocks=1 00:25:37.577 --rc geninfo_unexecuted_blocks=1 00:25:37.577 00:25:37.577 ' 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:37.577 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:37.578 08:44:16 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78703 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78703 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 78703 ']' 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:37.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:37.578 08:44:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:37.836 [2024-11-19 08:44:16.873243] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
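Here `waitforlisten 78703` blocks until the freshly started `spdk_tgt -m 0x1` answers RPCs on /var/tmp/spdk.sock (up to max_retries=100, as the trace shows). A sketch of that readiness pattern, assuming a successful call to rpc_get_methods (a standard SPDK RPC) is an adequate probe; `wait_for_rpc` is a hypothetical stand-in for the real helper, while `$spdk_tgt_bin` and `$rpc_py` are the variables set in ftl/common.sh above:

    "$spdk_tgt_bin" -m 0x1 &
    svcpid=$!
    wait_for_rpc() {
      local rpc_addr=${1:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
        # the target counts as up once it answers an RPC on its UNIX socket
        "$rpc_py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        kill -0 "$svcpid" 2>/dev/null || return 1   # give up if the target died
        sleep 0.5
      done
      return 1
    }
    wait_for_rpc /var/tmp/spdk.sock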
00:25:37.836 [2024-11-19 08:44:16.873400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78703 ] 00:25:37.836 [2024-11-19 08:44:17.054321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.094 [2024-11-19 08:44:17.181517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.661 08:44:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:38.661 08:44:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:38.661 08:44:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:38.661 08:44:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:38.661 08:44:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:38.661 08:44:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:38.661 08:44:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:38.661 08:44:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:39.228 08:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:39.228 08:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:39.228 08:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:39.228 08:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:39.228 08:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:39.228 08:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:39.228 08:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:39.228 08:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:39.488 08:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:39.488 { 00:25:39.488 "name": "nvme0n1", 00:25:39.488 "aliases": [ 00:25:39.488 "7ffe6480-3aac-436e-9844-a55aa41c6949" 00:25:39.488 ], 00:25:39.488 "product_name": "NVMe disk", 00:25:39.488 "block_size": 4096, 00:25:39.488 "num_blocks": 1310720, 00:25:39.488 "uuid": "7ffe6480-3aac-436e-9844-a55aa41c6949", 00:25:39.488 "numa_id": -1, 00:25:39.488 "assigned_rate_limits": { 00:25:39.488 "rw_ios_per_sec": 0, 00:25:39.488 "rw_mbytes_per_sec": 0, 00:25:39.488 "r_mbytes_per_sec": 0, 00:25:39.488 "w_mbytes_per_sec": 0 00:25:39.488 }, 00:25:39.488 "claimed": true, 00:25:39.488 "claim_type": "read_many_write_one", 00:25:39.488 "zoned": false, 00:25:39.488 "supported_io_types": { 00:25:39.488 "read": true, 00:25:39.488 "write": true, 00:25:39.488 "unmap": true, 00:25:39.488 "flush": true, 00:25:39.488 "reset": true, 00:25:39.488 "nvme_admin": true, 00:25:39.488 "nvme_io": true, 00:25:39.488 "nvme_io_md": false, 00:25:39.488 "write_zeroes": true, 00:25:39.488 "zcopy": false, 00:25:39.488 "get_zone_info": false, 00:25:39.488 "zone_management": false, 00:25:39.488 "zone_append": false, 00:25:39.488 "compare": true, 00:25:39.488 "compare_and_write": false, 00:25:39.488 "abort": true, 00:25:39.488 "seek_hole": false, 00:25:39.488 "seek_data": false, 00:25:39.488 
"copy": true, 00:25:39.488 "nvme_iov_md": false 00:25:39.488 }, 00:25:39.488 "driver_specific": { 00:25:39.488 "nvme": [ 00:25:39.488 { 00:25:39.488 "pci_address": "0000:00:11.0", 00:25:39.488 "trid": { 00:25:39.488 "trtype": "PCIe", 00:25:39.488 "traddr": "0000:00:11.0" 00:25:39.488 }, 00:25:39.488 "ctrlr_data": { 00:25:39.488 "cntlid": 0, 00:25:39.488 "vendor_id": "0x1b36", 00:25:39.488 "model_number": "QEMU NVMe Ctrl", 00:25:39.488 "serial_number": "12341", 00:25:39.488 "firmware_revision": "8.0.0", 00:25:39.488 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:39.488 "oacs": { 00:25:39.488 "security": 0, 00:25:39.488 "format": 1, 00:25:39.488 "firmware": 0, 00:25:39.488 "ns_manage": 1 00:25:39.488 }, 00:25:39.488 "multi_ctrlr": false, 00:25:39.488 "ana_reporting": false 00:25:39.488 }, 00:25:39.488 "vs": { 00:25:39.488 "nvme_version": "1.4" 00:25:39.488 }, 00:25:39.488 "ns_data": { 00:25:39.488 "id": 1, 00:25:39.488 "can_share": false 00:25:39.488 } 00:25:39.488 } 00:25:39.488 ], 00:25:39.488 "mp_policy": "active_passive" 00:25:39.488 } 00:25:39.488 } 00:25:39.488 ]' 00:25:39.488 08:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:39.488 08:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:39.488 08:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:39.488 08:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:39.488 08:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:39.488 08:44:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:39.488 08:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:39.488 08:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:39.488 08:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:39.488 08:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:39.488 08:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:39.747 08:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=80e04b89-5c13-4dee-a73c-10304ba60cd1 00:25:39.747 08:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:39.747 08:44:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 80e04b89-5c13-4dee-a73c-10304ba60cd1 00:25:40.006 08:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:40.265 08:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=32791398-951d-4c55-9b9f-52c8bae0fe21 00:25:40.265 08:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 32791398-951d-4c55-9b9f-52c8bae0fe21 00:25:40.523 08:44:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=5e59007c-990c-43c5-8ec7-3f7766122c4e 00:25:40.524 08:44:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:40.524 08:44:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5e59007c-990c-43c5-8ec7-3f7766122c4e 00:25:40.524 08:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:40.524 08:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:40.524 08:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=5e59007c-990c-43c5-8ec7-3f7766122c4e 00:25:40.524 08:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:40.524 08:44:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5e59007c-990c-43c5-8ec7-3f7766122c4e 00:25:40.524 08:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5e59007c-990c-43c5-8ec7-3f7766122c4e 00:25:40.524 08:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:40.524 08:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:40.524 08:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:40.524 08:44:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e59007c-990c-43c5-8ec7-3f7766122c4e 00:25:40.782 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:40.782 { 00:25:40.782 "name": "5e59007c-990c-43c5-8ec7-3f7766122c4e", 00:25:40.782 "aliases": [ 00:25:40.782 "lvs/nvme0n1p0" 00:25:40.782 ], 00:25:40.782 "product_name": "Logical Volume", 00:25:40.782 "block_size": 4096, 00:25:40.782 "num_blocks": 26476544, 00:25:40.782 "uuid": "5e59007c-990c-43c5-8ec7-3f7766122c4e", 00:25:40.782 "assigned_rate_limits": { 00:25:40.783 "rw_ios_per_sec": 0, 00:25:40.783 "rw_mbytes_per_sec": 0, 00:25:40.783 "r_mbytes_per_sec": 0, 00:25:40.783 "w_mbytes_per_sec": 0 00:25:40.783 }, 00:25:40.783 "claimed": false, 00:25:40.783 "zoned": false, 00:25:40.783 "supported_io_types": { 00:25:40.783 "read": true, 00:25:40.783 "write": true, 00:25:40.783 "unmap": true, 00:25:40.783 "flush": false, 00:25:40.783 "reset": true, 00:25:40.783 "nvme_admin": false, 00:25:40.783 "nvme_io": false, 00:25:40.783 "nvme_io_md": false, 00:25:40.783 "write_zeroes": true, 00:25:40.783 "zcopy": false, 00:25:40.783 "get_zone_info": false, 00:25:40.783 "zone_management": false, 00:25:40.783 "zone_append": false, 00:25:40.783 "compare": false, 00:25:40.783 "compare_and_write": false, 00:25:40.783 "abort": false, 00:25:40.783 "seek_hole": true, 00:25:40.783 "seek_data": true, 00:25:40.783 "copy": false, 00:25:40.783 "nvme_iov_md": false 00:25:40.783 }, 00:25:40.783 "driver_specific": { 00:25:40.783 "lvol": { 00:25:40.783 "lvol_store_uuid": "32791398-951d-4c55-9b9f-52c8bae0fe21", 00:25:40.783 "base_bdev": "nvme0n1", 00:25:40.783 "thin_provision": true, 00:25:40.783 "num_allocated_clusters": 0, 00:25:40.783 "snapshot": false, 00:25:40.783 "clone": false, 00:25:40.783 "esnap_clone": false 00:25:40.783 } 00:25:40.783 } 00:25:40.783 } 00:25:40.783 ]' 00:25:40.783 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:40.783 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:40.783 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:41.042 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:41.042 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:41.042 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:41.042 08:44:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:41.042 08:44:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:41.042 08:44:20 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:41.301 08:44:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:41.301 08:44:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:41.301 08:44:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 5e59007c-990c-43c5-8ec7-3f7766122c4e 00:25:41.301 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5e59007c-990c-43c5-8ec7-3f7766122c4e 00:25:41.301 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:41.301 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:41.301 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:41.301 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e59007c-990c-43c5-8ec7-3f7766122c4e 00:25:41.560 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:41.560 { 00:25:41.560 "name": "5e59007c-990c-43c5-8ec7-3f7766122c4e", 00:25:41.560 "aliases": [ 00:25:41.560 "lvs/nvme0n1p0" 00:25:41.560 ], 00:25:41.560 "product_name": "Logical Volume", 00:25:41.560 "block_size": 4096, 00:25:41.560 "num_blocks": 26476544, 00:25:41.560 "uuid": "5e59007c-990c-43c5-8ec7-3f7766122c4e", 00:25:41.560 "assigned_rate_limits": { 00:25:41.560 "rw_ios_per_sec": 0, 00:25:41.560 "rw_mbytes_per_sec": 0, 00:25:41.560 "r_mbytes_per_sec": 0, 00:25:41.560 "w_mbytes_per_sec": 0 00:25:41.560 }, 00:25:41.560 "claimed": false, 00:25:41.560 "zoned": false, 00:25:41.560 "supported_io_types": { 00:25:41.560 "read": true, 00:25:41.560 "write": true, 00:25:41.560 "unmap": true, 00:25:41.560 "flush": false, 00:25:41.560 "reset": true, 00:25:41.560 "nvme_admin": false, 00:25:41.560 "nvme_io": false, 00:25:41.560 "nvme_io_md": false, 00:25:41.560 "write_zeroes": true, 00:25:41.560 "zcopy": false, 00:25:41.560 "get_zone_info": false, 00:25:41.560 "zone_management": false, 00:25:41.560 "zone_append": false, 00:25:41.560 "compare": false, 00:25:41.560 "compare_and_write": false, 00:25:41.560 "abort": false, 00:25:41.560 "seek_hole": true, 00:25:41.560 "seek_data": true, 00:25:41.560 "copy": false, 00:25:41.560 "nvme_iov_md": false 00:25:41.560 }, 00:25:41.560 "driver_specific": { 00:25:41.560 "lvol": { 00:25:41.560 "lvol_store_uuid": "32791398-951d-4c55-9b9f-52c8bae0fe21", 00:25:41.560 "base_bdev": "nvme0n1", 00:25:41.560 "thin_provision": true, 00:25:41.560 "num_allocated_clusters": 0, 00:25:41.560 "snapshot": false, 00:25:41.560 "clone": false, 00:25:41.560 "esnap_clone": false 00:25:41.560 } 00:25:41.560 } 00:25:41.560 } 00:25:41.560 ]' 00:25:41.560 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:41.560 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:41.560 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:41.560 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:41.560 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:41.560 08:44:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:41.560 08:44:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:41.560 08:44:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:42.127 08:44:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:42.127 08:44:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 5e59007c-990c-43c5-8ec7-3f7766122c4e 00:25:42.127 08:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5e59007c-990c-43c5-8ec7-3f7766122c4e 00:25:42.127 08:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:42.127 08:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:42.127 08:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:42.127 08:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e59007c-990c-43c5-8ec7-3f7766122c4e 00:25:42.127 08:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:42.127 { 00:25:42.127 "name": "5e59007c-990c-43c5-8ec7-3f7766122c4e", 00:25:42.127 "aliases": [ 00:25:42.127 "lvs/nvme0n1p0" 00:25:42.127 ], 00:25:42.127 "product_name": "Logical Volume", 00:25:42.127 "block_size": 4096, 00:25:42.127 "num_blocks": 26476544, 00:25:42.127 "uuid": "5e59007c-990c-43c5-8ec7-3f7766122c4e", 00:25:42.127 "assigned_rate_limits": { 00:25:42.127 "rw_ios_per_sec": 0, 00:25:42.127 "rw_mbytes_per_sec": 0, 00:25:42.127 "r_mbytes_per_sec": 0, 00:25:42.127 "w_mbytes_per_sec": 0 00:25:42.127 }, 00:25:42.127 "claimed": false, 00:25:42.127 "zoned": false, 00:25:42.127 "supported_io_types": { 00:25:42.127 "read": true, 00:25:42.127 "write": true, 00:25:42.127 "unmap": true, 00:25:42.127 "flush": false, 00:25:42.127 "reset": true, 00:25:42.127 "nvme_admin": false, 00:25:42.127 "nvme_io": false, 00:25:42.127 "nvme_io_md": false, 00:25:42.127 "write_zeroes": true, 00:25:42.127 "zcopy": false, 00:25:42.127 "get_zone_info": false, 00:25:42.127 "zone_management": false, 00:25:42.127 "zone_append": false, 00:25:42.127 "compare": false, 00:25:42.127 "compare_and_write": false, 00:25:42.127 "abort": false, 00:25:42.127 "seek_hole": true, 00:25:42.127 "seek_data": true, 00:25:42.127 "copy": false, 00:25:42.127 "nvme_iov_md": false 00:25:42.127 }, 00:25:42.127 "driver_specific": { 00:25:42.128 "lvol": { 00:25:42.128 "lvol_store_uuid": "32791398-951d-4c55-9b9f-52c8bae0fe21", 00:25:42.128 "base_bdev": "nvme0n1", 00:25:42.128 "thin_provision": true, 00:25:42.128 "num_allocated_clusters": 0, 00:25:42.128 "snapshot": false, 00:25:42.128 "clone": false, 00:25:42.128 "esnap_clone": false 00:25:42.128 } 00:25:42.128 } 00:25:42.128 } 00:25:42.128 ]' 00:25:42.128 08:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:42.387 08:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:42.387 08:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:42.387 08:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:42.387 08:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:42.387 08:44:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:42.387 08:44:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:42.387 08:44:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 5e59007c-990c-43c5-8ec7-3f7766122c4e 
--l2p_dram_limit 10' 00:25:42.387 08:44:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:42.387 08:44:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:42.387 08:44:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:42.387 08:44:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5e59007c-990c-43c5-8ec7-3f7766122c4e --l2p_dram_limit 10 -c nvc0n1p0 00:25:42.646 [2024-11-19 08:44:21.726506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.646 [2024-11-19 08:44:21.726561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:42.646 [2024-11-19 08:44:21.726603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:42.646 [2024-11-19 08:44:21.726615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.646 [2024-11-19 08:44:21.726746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.646 [2024-11-19 08:44:21.726798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:42.646 [2024-11-19 08:44:21.726814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:25:42.647 [2024-11-19 08:44:21.726826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.647 [2024-11-19 08:44:21.726874] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:42.647 [2024-11-19 08:44:21.727861] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:42.647 [2024-11-19 08:44:21.728125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.647 [2024-11-19 08:44:21.728148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:42.647 [2024-11-19 08:44:21.728164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:25:42.647 [2024-11-19 08:44:21.728177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.647 [2024-11-19 08:44:21.728337] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 68a09d1c-a153-421e-98ab-c545dfae1eab 00:25:42.647 [2024-11-19 08:44:21.729391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.647 [2024-11-19 08:44:21.729429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:42.647 [2024-11-19 08:44:21.729462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:42.647 [2024-11-19 08:44:21.729475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.647 [2024-11-19 08:44:21.733897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.647 [2024-11-19 08:44:21.733958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:42.647 [2024-11-19 08:44:21.733994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.367 ms 00:25:42.647 [2024-11-19 08:44:21.734008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.647 [2024-11-19 08:44:21.734117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.647 [2024-11-19 08:44:21.734138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:42.647 [2024-11-19 08:44:21.734150] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:25:42.647 [2024-11-19 08:44:21.734167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.647 [2024-11-19 08:44:21.734226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.647 [2024-11-19 08:44:21.734247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:42.647 [2024-11-19 08:44:21.734259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:42.647 [2024-11-19 08:44:21.734274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.647 [2024-11-19 08:44:21.734304] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:42.647 [2024-11-19 08:44:21.738546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.647 [2024-11-19 08:44:21.738583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:42.647 [2024-11-19 08:44:21.738621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.247 ms 00:25:42.647 [2024-11-19 08:44:21.738665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.647 [2024-11-19 08:44:21.738709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.647 [2024-11-19 08:44:21.738724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:42.647 [2024-11-19 08:44:21.738738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:42.647 [2024-11-19 08:44:21.738765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.647 [2024-11-19 08:44:21.738834] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:42.647 [2024-11-19 08:44:21.739025] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:42.647 [2024-11-19 08:44:21.739070] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:42.647 [2024-11-19 08:44:21.739087] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:42.647 [2024-11-19 08:44:21.739107] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:42.647 [2024-11-19 08:44:21.739123] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:42.647 [2024-11-19 08:44:21.739138] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:42.647 [2024-11-19 08:44:21.739150] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:42.647 [2024-11-19 08:44:21.739167] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:42.647 [2024-11-19 08:44:21.739179] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:42.647 [2024-11-19 08:44:21.739194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.647 [2024-11-19 08:44:21.739207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:42.647 [2024-11-19 08:44:21.739222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:25:42.647 [2024-11-19 08:44:21.739245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.647 [2024-11-19 08:44:21.739351] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.647 [2024-11-19 08:44:21.739603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:42.647 [2024-11-19 08:44:21.739661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:42.647 [2024-11-19 08:44:21.739675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.647 [2024-11-19 08:44:21.739806] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:42.647 [2024-11-19 08:44:21.739838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:42.647 [2024-11-19 08:44:21.739857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:42.647 [2024-11-19 08:44:21.739870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.647 [2024-11-19 08:44:21.739884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:42.647 [2024-11-19 08:44:21.739910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:42.647 [2024-11-19 08:44:21.739923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:42.647 [2024-11-19 08:44:21.739934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:42.647 [2024-11-19 08:44:21.739947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:42.647 [2024-11-19 08:44:21.739971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:42.647 [2024-11-19 08:44:21.739984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:42.647 [2024-11-19 08:44:21.739994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:42.647 [2024-11-19 08:44:21.740007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:42.647 [2024-11-19 08:44:21.740017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:42.647 [2024-11-19 08:44:21.740039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:42.647 [2024-11-19 08:44:21.740049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.647 [2024-11-19 08:44:21.740063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:42.647 [2024-11-19 08:44:21.740074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:42.647 [2024-11-19 08:44:21.740088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.647 [2024-11-19 08:44:21.740099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:42.647 [2024-11-19 08:44:21.740111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:42.647 [2024-11-19 08:44:21.740121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.647 [2024-11-19 08:44:21.740133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:42.647 [2024-11-19 08:44:21.740144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:42.647 [2024-11-19 08:44:21.740156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.647 [2024-11-19 08:44:21.740166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:42.647 [2024-11-19 08:44:21.740178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:42.647 [2024-11-19 08:44:21.740203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.647 [2024-11-19 08:44:21.740214] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:42.647 [2024-11-19 08:44:21.740224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:42.647 [2024-11-19 08:44:21.740236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:42.647 [2024-11-19 08:44:21.740246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:42.647 [2024-11-19 08:44:21.740260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:42.647 [2024-11-19 08:44:21.740270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:42.647 [2024-11-19 08:44:21.740282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:42.647 [2024-11-19 08:44:21.740293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:42.647 [2024-11-19 08:44:21.740305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:42.648 [2024-11-19 08:44:21.740315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:42.648 [2024-11-19 08:44:21.740327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:42.648 [2024-11-19 08:44:21.740337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.648 [2024-11-19 08:44:21.740349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:42.648 [2024-11-19 08:44:21.740359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:42.648 [2024-11-19 08:44:21.740371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.648 [2024-11-19 08:44:21.740381] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:42.648 [2024-11-19 08:44:21.740394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:42.648 [2024-11-19 08:44:21.740405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:42.648 [2024-11-19 08:44:21.740419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.648 [2024-11-19 08:44:21.740430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:42.648 [2024-11-19 08:44:21.740444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:42.648 [2024-11-19 08:44:21.740455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:42.648 [2024-11-19 08:44:21.740467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:42.648 [2024-11-19 08:44:21.740477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:42.648 [2024-11-19 08:44:21.740488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:42.648 [2024-11-19 08:44:21.740503] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:42.648 [2024-11-19 08:44:21.740520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:42.648 [2024-11-19 08:44:21.740535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:42.648 [2024-11-19 08:44:21.740547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:42.648 [2024-11-19 08:44:21.740558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:42.648 [2024-11-19 08:44:21.740571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:42.648 [2024-11-19 08:44:21.740583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:42.648 [2024-11-19 08:44:21.740595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:42.648 [2024-11-19 08:44:21.740606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:42.648 [2024-11-19 08:44:21.740635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:42.648 [2024-11-19 08:44:21.740646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:42.648 [2024-11-19 08:44:21.740661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:42.648 [2024-11-19 08:44:21.740705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:42.648 [2024-11-19 08:44:21.740720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:42.648 [2024-11-19 08:44:21.740732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:42.648 [2024-11-19 08:44:21.740748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:42.648 [2024-11-19 08:44:21.740760] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:42.648 [2024-11-19 08:44:21.740783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:42.648 [2024-11-19 08:44:21.740796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:42.648 [2024-11-19 08:44:21.740810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:42.648 [2024-11-19 08:44:21.740821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:42.648 [2024-11-19 08:44:21.740835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:42.648 [2024-11-19 08:44:21.740848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.648 [2024-11-19 08:44:21.740863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:42.648 [2024-11-19 08:44:21.740875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:25:42.648 [2024-11-19 08:44:21.740888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.648 [2024-11-19 08:44:21.740942] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:42.648 [2024-11-19 08:44:21.740980] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:44.567 [2024-11-19 08:44:23.785065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.567 [2024-11-19 08:44:23.785136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:44.567 [2024-11-19 08:44:23.785157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2044.135 ms 00:25:44.567 [2024-11-19 08:44:23.785170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.567 [2024-11-19 08:44:23.815260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.567 [2024-11-19 08:44:23.815579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:44.567 [2024-11-19 08:44:23.815641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.860 ms 00:25:44.567 [2024-11-19 08:44:23.815663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.567 [2024-11-19 08:44:23.815863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.567 [2024-11-19 08:44:23.815888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:44.567 [2024-11-19 08:44:23.815917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:44.567 [2024-11-19 08:44:23.815934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.567 [2024-11-19 08:44:23.857729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.567 [2024-11-19 08:44:23.857787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:44.567 [2024-11-19 08:44:23.857808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.736 ms 00:25:44.567 [2024-11-19 08:44:23.857825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.567 [2024-11-19 08:44:23.857884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.567 [2024-11-19 08:44:23.857908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:44.567 [2024-11-19 08:44:23.857922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:44.567 [2024-11-19 08:44:23.857936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.567 [2024-11-19 08:44:23.858308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.567 [2024-11-19 08:44:23.858333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:44.567 [2024-11-19 08:44:23.858347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:25:44.567 [2024-11-19 08:44:23.858361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.567 [2024-11-19 08:44:23.858497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.567 [2024-11-19 08:44:23.858516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:44.567 [2024-11-19 08:44:23.858532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:25:44.567 [2024-11-19 08:44:23.858548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.827 [2024-11-19 08:44:23.876623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.827 [2024-11-19 08:44:23.876880] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:44.827 [2024-11-19 08:44:23.876911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.048 ms 00:25:44.827 [2024-11-19 08:44:23.876926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.827 [2024-11-19 08:44:23.890719] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:44.827 [2024-11-19 08:44:23.893456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.827 [2024-11-19 08:44:23.893493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:44.827 [2024-11-19 08:44:23.893515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.408 ms 00:25:44.827 [2024-11-19 08:44:23.893527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.827 [2024-11-19 08:44:23.963534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.827 [2024-11-19 08:44:23.963599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:44.827 [2024-11-19 08:44:23.963665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.957 ms 00:25:44.827 [2024-11-19 08:44:23.963690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.827 [2024-11-19 08:44:23.963920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.827 [2024-11-19 08:44:23.963945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:44.827 [2024-11-19 08:44:23.963964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:25:44.827 [2024-11-19 08:44:23.963977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.827 [2024-11-19 08:44:23.995380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.827 [2024-11-19 08:44:23.995423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:44.827 [2024-11-19 08:44:23.995445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.327 ms 00:25:44.827 [2024-11-19 08:44:23.995457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.827 [2024-11-19 08:44:24.025913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.827 [2024-11-19 08:44:24.026017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:44.827 [2024-11-19 08:44:24.026048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.397 ms 00:25:44.827 [2024-11-19 08:44:24.026077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.827 [2024-11-19 08:44:24.026864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.827 [2024-11-19 08:44:24.026896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:44.827 [2024-11-19 08:44:24.026915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:25:44.827 [2024-11-19 08:44:24.026927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.827 [2024-11-19 08:44:24.108639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.827 [2024-11-19 08:44:24.108725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:44.827 [2024-11-19 08:44:24.108752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.636 ms 00:25:44.827 [2024-11-19 08:44:24.108766] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.087 [2024-11-19 08:44:24.140576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.087 [2024-11-19 08:44:24.140900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:45.087 [2024-11-19 08:44:24.140937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.671 ms 00:25:45.087 [2024-11-19 08:44:24.140952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.087 [2024-11-19 08:44:24.170259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.087 [2024-11-19 08:44:24.170297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:45.087 [2024-11-19 08:44:24.170316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.252 ms 00:25:45.087 [2024-11-19 08:44:24.170326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.087 [2024-11-19 08:44:24.203298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.087 [2024-11-19 08:44:24.203342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:45.087 [2024-11-19 08:44:24.203364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.924 ms 00:25:45.087 [2024-11-19 08:44:24.203377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.087 [2024-11-19 08:44:24.203436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.087 [2024-11-19 08:44:24.203455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:45.087 [2024-11-19 08:44:24.203474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:45.087 [2024-11-19 08:44:24.203486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.087 [2024-11-19 08:44:24.203701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.087 [2024-11-19 08:44:24.203724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:45.087 [2024-11-19 08:44:24.203743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:25:45.087 [2024-11-19 08:44:24.203756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.087 [2024-11-19 08:44:24.204870] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2477.868 ms, result 0 00:25:45.087 { 00:25:45.087 "name": "ftl0", 00:25:45.087 "uuid": "68a09d1c-a153-421e-98ab-c545dfae1eab" 00:25:45.087 } 00:25:45.087 08:44:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:45.087 08:44:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:45.346 08:44:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:45.346 08:44:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:45.346 08:44:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:45.605 /dev/nbd0 00:25:45.605 08:44:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:45.605 08:44:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:45.605 08:44:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:45.605 08:44:24 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:45.605 08:44:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:45.605 08:44:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:45.605 08:44:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:45.605 08:44:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:45.605 08:44:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:45.605 08:44:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:45.605 1+0 records in 00:25:45.605 1+0 records out 00:25:45.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369757 s, 11.1 MB/s 00:25:45.605 08:44:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:45.605 08:44:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:45.605 08:44:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:45.605 08:44:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:45.605 08:44:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:45.605 08:44:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:45.865 [2024-11-19 08:44:24.983032] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:25:45.865 [2024-11-19 08:44:24.983469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78841 ] 00:25:46.124 [2024-11-19 08:44:25.166695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.124 [2024-11-19 08:44:25.282900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.502  [2024-11-19T08:44:27.735Z] Copying: 176/1024 [MB] (176 MBps) [2024-11-19T08:44:28.668Z] Copying: 350/1024 [MB] (174 MBps) [2024-11-19T08:44:29.604Z] Copying: 526/1024 [MB] (175 MBps) [2024-11-19T08:44:30.980Z] Copying: 701/1024 [MB] (175 MBps) [2024-11-19T08:44:31.548Z] Copying: 867/1024 [MB] (165 MBps) [2024-11-19T08:44:32.925Z] Copying: 1024/1024 [MB] (average 171 MBps) 00:25:53.629 00:25:53.629 08:44:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:55.534 08:44:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:55.534 [2024-11-19 08:44:34.730582] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
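The `spdk_dd` run above stages 262144 blocks of 4096 bytes (1 GiB) of random data in testfile at an average of 171 MBps; the steps that follow checksum that file and replay it onto the FTL bdev through /dev/nbd0. Condensed, with paths shortened but flags exactly as in the run, the data path is:

    # stage 1 GiB of random data: 262144 blocks x 4096 bytes
    spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
    md5sum testfile   # reference checksum of the staged data
    # replay the file onto the FTL device exported over nbd, bypassing the page cache
    spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct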
00:25:55.534 [2024-11-19 08:44:34.730744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78945 ] 00:25:55.793 [2024-11-19 08:44:34.902087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.793 [2024-11-19 08:44:34.991980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.171  [2024-11-19T08:44:37.426Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-19T08:44:38.362Z] Copying: 32/1024 [MB] (15 MBps) [2024-11-19T08:44:39.298Z] Copying: 48/1024 [MB] (16 MBps) [2024-11-19T08:44:40.675Z] Copying: 63/1024 [MB] (14 MBps) [2024-11-19T08:44:41.610Z] Copying: 77/1024 [MB] (14 MBps) [2024-11-19T08:44:42.546Z] Copying: 91/1024 [MB] (14 MBps) [2024-11-19T08:44:43.488Z] Copying: 106/1024 [MB] (14 MBps) [2024-11-19T08:44:44.425Z] Copying: 121/1024 [MB] (14 MBps) [2024-11-19T08:44:45.362Z] Copying: 136/1024 [MB] (15 MBps) [2024-11-19T08:44:46.299Z] Copying: 151/1024 [MB] (14 MBps) [2024-11-19T08:44:47.678Z] Copying: 166/1024 [MB] (15 MBps) [2024-11-19T08:44:48.612Z] Copying: 180/1024 [MB] (14 MBps) [2024-11-19T08:44:49.547Z] Copying: 196/1024 [MB] (15 MBps) [2024-11-19T08:44:50.482Z] Copying: 211/1024 [MB] (15 MBps) [2024-11-19T08:44:51.446Z] Copying: 226/1024 [MB] (15 MBps) [2024-11-19T08:44:52.382Z] Copying: 241/1024 [MB] (15 MBps) [2024-11-19T08:44:53.316Z] Copying: 256/1024 [MB] (14 MBps) [2024-11-19T08:44:54.691Z] Copying: 272/1024 [MB] (15 MBps) [2024-11-19T08:44:55.627Z] Copying: 287/1024 [MB] (15 MBps) [2024-11-19T08:44:56.564Z] Copying: 303/1024 [MB] (15 MBps) [2024-11-19T08:44:57.501Z] Copying: 317/1024 [MB] (14 MBps) [2024-11-19T08:44:58.439Z] Copying: 333/1024 [MB] (15 MBps) [2024-11-19T08:44:59.375Z] Copying: 348/1024 [MB] (14 MBps) [2024-11-19T08:45:00.313Z] Copying: 362/1024 [MB] (14 MBps) [2024-11-19T08:45:01.691Z] Copying: 378/1024 [MB] (15 MBps) [2024-11-19T08:45:02.628Z] Copying: 393/1024 [MB] (15 MBps) [2024-11-19T08:45:03.564Z] Copying: 409/1024 [MB] (15 MBps) [2024-11-19T08:45:04.501Z] Copying: 425/1024 [MB] (16 MBps) [2024-11-19T08:45:05.438Z] Copying: 441/1024 [MB] (15 MBps) [2024-11-19T08:45:06.411Z] Copying: 457/1024 [MB] (16 MBps) [2024-11-19T08:45:07.360Z] Copying: 473/1024 [MB] (15 MBps) [2024-11-19T08:45:08.297Z] Copying: 489/1024 [MB] (15 MBps) [2024-11-19T08:45:09.674Z] Copying: 504/1024 [MB] (15 MBps) [2024-11-19T08:45:10.608Z] Copying: 520/1024 [MB] (15 MBps) [2024-11-19T08:45:11.542Z] Copying: 536/1024 [MB] (15 MBps) [2024-11-19T08:45:12.477Z] Copying: 551/1024 [MB] (15 MBps) [2024-11-19T08:45:13.411Z] Copying: 566/1024 [MB] (14 MBps) [2024-11-19T08:45:14.347Z] Copying: 582/1024 [MB] (15 MBps) [2024-11-19T08:45:15.726Z] Copying: 596/1024 [MB] (14 MBps) [2024-11-19T08:45:16.295Z] Copying: 611/1024 [MB] (14 MBps) [2024-11-19T08:45:17.674Z] Copying: 625/1024 [MB] (14 MBps) [2024-11-19T08:45:18.609Z] Copying: 639/1024 [MB] (14 MBps) [2024-11-19T08:45:19.544Z] Copying: 654/1024 [MB] (14 MBps) [2024-11-19T08:45:20.483Z] Copying: 669/1024 [MB] (15 MBps) [2024-11-19T08:45:21.435Z] Copying: 684/1024 [MB] (14 MBps) [2024-11-19T08:45:22.370Z] Copying: 699/1024 [MB] (14 MBps) [2024-11-19T08:45:23.306Z] Copying: 714/1024 [MB] (15 MBps) [2024-11-19T08:45:24.685Z] Copying: 729/1024 [MB] (15 MBps) [2024-11-19T08:45:25.623Z] Copying: 745/1024 [MB] (15 MBps) [2024-11-19T08:45:26.559Z] Copying: 760/1024 [MB] (15 MBps) 
[2024-11-19T08:45:27.496Z] Copying: 775/1024 [MB] (15 MBps) [2024-11-19T08:45:28.433Z] Copying: 790/1024 [MB] (14 MBps) [2024-11-19T08:45:29.377Z] Copying: 804/1024 [MB] (14 MBps) [2024-11-19T08:45:30.313Z] Copying: 818/1024 [MB] (14 MBps) [2024-11-19T08:45:31.690Z] Copying: 833/1024 [MB] (14 MBps) [2024-11-19T08:45:32.626Z] Copying: 847/1024 [MB] (14 MBps) [2024-11-19T08:45:33.562Z] Copying: 861/1024 [MB] (14 MBps) [2024-11-19T08:45:34.498Z] Copying: 876/1024 [MB] (14 MBps) [2024-11-19T08:45:35.475Z] Copying: 890/1024 [MB] (14 MBps) [2024-11-19T08:45:36.412Z] Copying: 905/1024 [MB] (14 MBps) [2024-11-19T08:45:37.348Z] Copying: 919/1024 [MB] (14 MBps) [2024-11-19T08:45:38.724Z] Copying: 934/1024 [MB] (14 MBps) [2024-11-19T08:45:39.294Z] Copying: 949/1024 [MB] (15 MBps) [2024-11-19T08:45:40.672Z] Copying: 965/1024 [MB] (15 MBps) [2024-11-19T08:45:41.606Z] Copying: 980/1024 [MB] (15 MBps) [2024-11-19T08:45:42.541Z] Copying: 995/1024 [MB] (15 MBps) [2024-11-19T08:45:43.477Z] Copying: 1010/1024 [MB] (14 MBps) [2024-11-19T08:45:44.414Z] Copying: 1024/1024 [MB] (average 15 MBps) 00:27:05.118 00:27:05.118 08:45:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:05.118 08:45:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:05.377 08:45:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:05.637 [2024-11-19 08:45:44.770646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.637 [2024-11-19 08:45:44.770726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:05.637 [2024-11-19 08:45:44.770749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:05.637 [2024-11-19 08:45:44.770779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.637 [2024-11-19 08:45:44.770834] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:05.637 [2024-11-19 08:45:44.774676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.637 [2024-11-19 08:45:44.774708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:05.637 [2024-11-19 08:45:44.774740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.792 ms 00:27:05.637 [2024-11-19 08:45:44.774751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.637 [2024-11-19 08:45:44.777516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.637 [2024-11-19 08:45:44.777797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:05.637 [2024-11-19 08:45:44.777846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.691 ms 00:27:05.637 [2024-11-19 08:45:44.777860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.637 [2024-11-19 08:45:44.795483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.637 [2024-11-19 08:45:44.795530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:05.637 [2024-11-19 08:45:44.795553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.569 ms 00:27:05.637 [2024-11-19 08:45:44.795565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.637 [2024-11-19 08:45:44.802384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.637 
[2024-11-19 08:45:44.802418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:05.637 [2024-11-19 08:45:44.802453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.735 ms 00:27:05.637 [2024-11-19 08:45:44.802465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.637 [2024-11-19 08:45:44.837326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.637 [2024-11-19 08:45:44.837370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:05.637 [2024-11-19 08:45:44.837406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.764 ms 00:27:05.637 [2024-11-19 08:45:44.837417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.637 [2024-11-19 08:45:44.862238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.637 [2024-11-19 08:45:44.862301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:05.637 [2024-11-19 08:45:44.862340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.759 ms 00:27:05.637 [2024-11-19 08:45:44.862356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.637 [2024-11-19 08:45:44.862573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.637 [2024-11-19 08:45:44.862595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:05.637 [2024-11-19 08:45:44.862610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:27:05.637 [2024-11-19 08:45:44.862639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.637 [2024-11-19 08:45:44.898030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.637 [2024-11-19 08:45:44.898262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:05.637 [2024-11-19 08:45:44.898299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.298 ms 00:27:05.637 [2024-11-19 08:45:44.898314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.898 [2024-11-19 08:45:44.934040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.898 [2024-11-19 08:45:44.934083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:05.898 [2024-11-19 08:45:44.934135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.652 ms 00:27:05.898 [2024-11-19 08:45:44.934146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.898 [2024-11-19 08:45:44.967929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.898 [2024-11-19 08:45:44.968129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:05.898 [2024-11-19 08:45:44.968166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.707 ms 00:27:05.898 [2024-11-19 08:45:44.968179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.898 [2024-11-19 08:45:45.001551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.898 [2024-11-19 08:45:45.001629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:05.898 [2024-11-19 08:45:45.001653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.221 ms 00:27:05.898 [2024-11-19 08:45:45.001667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.898 [2024-11-19 08:45:45.001726] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:05.898 [2024-11-19 08:45:45.001753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.001771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.001785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.001799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.001813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.001827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.001840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.001857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.001870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.001885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.001898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.001912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.001925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.001940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.001953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.001967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.001980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.001995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.002007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.002023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.002035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.002054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.002066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.002083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.002095] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.002110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:05.898 [2024-11-19 08:45:45.002123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 
08:45:45.002463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:27:05.899 [2024-11-19 08:45:45.002868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.002993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.003006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.003020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.003033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.003050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.003063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.003077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.003090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.003105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.003118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.003138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.003152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.003169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.003182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.003197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.003210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.003227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:05.899 [2024-11-19 08:45:45.003250] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:05.899 [2024-11-19 08:45:45.003265] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 68a09d1c-a153-421e-98ab-c545dfae1eab 00:27:05.899 [2024-11-19 08:45:45.003278] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:05.899 [2024-11-19 08:45:45.003294] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:05.899 [2024-11-19 08:45:45.003306] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:05.899 [2024-11-19 08:45:45.003323] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:05.899 [2024-11-19 08:45:45.003335] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:05.899 [2024-11-19 08:45:45.003349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:05.899 [2024-11-19 08:45:45.003361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:05.899 [2024-11-19 08:45:45.003373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:05.899 [2024-11-19 08:45:45.003384] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:05.899 [2024-11-19 08:45:45.003398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.899 [2024-11-19 08:45:45.003411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:05.899 [2024-11-19 08:45:45.003426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.677 ms 00:27:05.899 [2024-11-19 08:45:45.003438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.899 [2024-11-19 08:45:45.021263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.899 [2024-11-19 08:45:45.021432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:05.899 [2024-11-19 08:45:45.021471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.749 ms 00:27:05.899 [2024-11-19 08:45:45.021484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.899 [2024-11-19 08:45:45.021949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:05.899 [2024-11-19 08:45:45.021978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:05.900 [2024-11-19 08:45:45.021997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:27:05.900 [2024-11-19 08:45:45.022009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.900 [2024-11-19 08:45:45.081414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.900 [2024-11-19 08:45:45.081486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:05.900 [2024-11-19 08:45:45.081524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.900 [2024-11-19 08:45:45.081537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.900 [2024-11-19 08:45:45.081634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.900 [2024-11-19 08:45:45.081653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:05.900 [2024-11-19 08:45:45.081670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:27:05.900 [2024-11-19 08:45:45.081682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.900 [2024-11-19 08:45:45.081830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.900 [2024-11-19 08:45:45.081852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:05.900 [2024-11-19 08:45:45.081871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.900 [2024-11-19 08:45:45.081883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.900 [2024-11-19 08:45:45.081916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.900 [2024-11-19 08:45:45.081931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:05.900 [2024-11-19 08:45:45.081946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.900 [2024-11-19 08:45:45.081957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-11-19 08:45:45.196080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.160 [2024-11-19 08:45:45.196162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:06.160 [2024-11-19 08:45:45.196187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.160 [2024-11-19 08:45:45.196200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-11-19 08:45:45.287104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.160 [2024-11-19 08:45:45.287217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:06.160 [2024-11-19 08:45:45.287270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.160 [2024-11-19 08:45:45.287283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-11-19 08:45:45.287456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.160 [2024-11-19 08:45:45.287477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:06.160 [2024-11-19 08:45:45.287493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.160 [2024-11-19 08:45:45.287508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-11-19 08:45:45.287584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.160 [2024-11-19 08:45:45.287603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:06.160 [2024-11-19 08:45:45.287619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.160 [2024-11-19 08:45:45.287631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-11-19 08:45:45.287798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.160 [2024-11-19 08:45:45.287820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:06.160 [2024-11-19 08:45:45.287837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.160 [2024-11-19 08:45:45.287849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-11-19 08:45:45.287914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.160 [2024-11-19 08:45:45.287933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:06.160 
[2024-11-19 08:45:45.287948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.160 [2024-11-19 08:45:45.287960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-11-19 08:45:45.288014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.160 [2024-11-19 08:45:45.288031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:06.160 [2024-11-19 08:45:45.288046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.160 [2024-11-19 08:45:45.288058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-11-19 08:45:45.288151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:06.160 [2024-11-19 08:45:45.288168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:06.160 [2024-11-19 08:45:45.288184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:06.160 [2024-11-19 08:45:45.288196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:06.160 [2024-11-19 08:45:45.288360] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 517.705 ms, result 0 00:27:06.160 true 00:27:06.160 08:45:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78703 00:27:06.160 08:45:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78703 00:27:06.160 08:45:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:06.160 [2024-11-19 08:45:45.443914] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:27:06.160 [2024-11-19 08:45:45.444120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79644 ] 00:27:06.419 [2024-11-19 08:45:45.622289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.678 [2024-11-19 08:45:45.733280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.055  [2024-11-19T08:45:48.289Z] Copying: 150/1024 [MB] (150 MBps) [2024-11-19T08:45:49.231Z] Copying: 330/1024 [MB] (180 MBps) [2024-11-19T08:45:50.166Z] Copying: 513/1024 [MB] (182 MBps) [2024-11-19T08:45:51.103Z] Copying: 694/1024 [MB] (180 MBps) [2024-11-19T08:45:52.038Z] Copying: 881/1024 [MB] (186 MBps) [2024-11-19T08:45:52.975Z] Copying: 1024/1024 [MB] (average 176 MBps) 00:27:13.679 00:27:13.679 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78703 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:13.679 08:45:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:13.679 [2024-11-19 08:45:52.832388] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
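Taken together, the ftl/dirty_shutdown.sh@NN steps traced above amount to the sequence sketched below. This is a condensed replay of the visible xtrace, not the script itself: $rpc and $spdk_tgt_pid stand in for the absolute rpc.py path and PID 78703 of this run, relative paths replace the /home/vagrant/spdk_repo prefixes, and the md5sum bookkeeping (step @76 and later) is omitted.

rpc=scripts/rpc.py

# @64-66: snapshot the live bdev subsystem config as standalone JSON so
# spdk_dd can later open ftl0 with no running target; the redirect target
# is inferred from the --json path used at @88.
{
    echo '{"subsystems": ['
    $rpc save_subsystem_config -n bdev
    echo ']}'
} > test/ftl/config/ftl.json

# @70-72: expose ftl0 as a kernel block device and wait for it to appear.
modprobe nbd
$rpc nbd_start_disk ftl0 /dev/nbd0
waitfornbd nbd0

# @75-78: stage 1 GiB (262144 x 4 KiB blocks) of random data, write it
# through the nbd device with O_DIRECT, and flush it.
spdk_dd -m 0x2 --if=/dev/urandom --of=test/ftl/testfile --bs=4096 --count=262144
spdk_dd -m 0x2 --if=test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
sync /dev/nbd0

# @79-80: detach nbd and unload ftl0, which persists a clean FTL state
# ("FTL shutdown ... result 0" in the trace above).
$rpc nbd_stop_disk /dev/nbd0
$rpc bdev_ftl_unload -b ftl0

# @83: no graceful target shutdown -- SIGKILL spdk_tgt (PID 78703 here).
kill -9 "$spdk_tgt_pid"

# @87-88: with the target gone, spdk_dd reopens ftl0 directly from the
# saved JSON and appends a second 1 GiB at block offset 262144; the FTL
# startup traced below is where the device is marked dirty again.
spdk_dd --if=/dev/urandom --of=test/ftl/testfile2 --bs=4096 --count=262144
spdk_dd --if=test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 \
        --json=test/ftl/config/ftl.json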
00:27:13.679 [2024-11-19 08:45:52.832601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79718 ] 00:27:13.937 [2024-11-19 08:45:53.011412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.937 [2024-11-19 08:45:53.099988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.195 [2024-11-19 08:45:53.399231] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:14.195 [2024-11-19 08:45:53.399341] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:14.195 [2024-11-19 08:45:53.465432] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:14.195 [2024-11-19 08:45:53.465890] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:14.195 [2024-11-19 08:45:53.466242] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:14.454 [2024-11-19 08:45:53.738812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.454 [2024-11-19 08:45:53.738878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:14.454 [2024-11-19 08:45:53.738912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:14.454 [2024-11-19 08:45:53.738922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.454 [2024-11-19 08:45:53.738986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.454 [2024-11-19 08:45:53.739003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:14.454 [2024-11-19 08:45:53.739014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:14.454 [2024-11-19 08:45:53.739024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.454 [2024-11-19 08:45:53.739051] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:14.454 [2024-11-19 08:45:53.740147] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:14.454 [2024-11-19 08:45:53.740228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.454 [2024-11-19 08:45:53.740241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:14.454 [2024-11-19 08:45:53.740253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.183 ms 00:27:14.454 [2024-11-19 08:45:53.740263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.454 [2024-11-19 08:45:53.741459] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:14.714 [2024-11-19 08:45:53.758329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-11-19 08:45:53.758392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:14.714 [2024-11-19 08:45:53.758430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.871 ms 00:27:14.714 [2024-11-19 08:45:53.758442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-11-19 08:45:53.758558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-11-19 08:45:53.758609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:27:14.714 [2024-11-19 08:45:53.758620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:14.714 [2024-11-19 08:45:53.758631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-11-19 08:45:53.763709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-11-19 08:45:53.763755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:14.714 [2024-11-19 08:45:53.763770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.933 ms 00:27:14.714 [2024-11-19 08:45:53.763781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-11-19 08:45:53.763878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-11-19 08:45:53.763898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:14.714 [2024-11-19 08:45:53.763910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:14.714 [2024-11-19 08:45:53.763921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-11-19 08:45:53.763981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-11-19 08:45:53.764020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:14.714 [2024-11-19 08:45:53.764033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:14.714 [2024-11-19 08:45:53.764059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-11-19 08:45:53.764119] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:14.714 [2024-11-19 08:45:53.768656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-11-19 08:45:53.768712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:14.714 [2024-11-19 08:45:53.768742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.557 ms 00:27:14.714 [2024-11-19 08:45:53.768751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-11-19 08:45:53.768791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-11-19 08:45:53.768806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:14.714 [2024-11-19 08:45:53.768816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:14.714 [2024-11-19 08:45:53.768825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-11-19 08:45:53.768889] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:14.714 [2024-11-19 08:45:53.768923] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:14.714 [2024-11-19 08:45:53.769006] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:14.714 [2024-11-19 08:45:53.769024] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:14.714 [2024-11-19 08:45:53.769129] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:14.714 [2024-11-19 08:45:53.769144] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:14.714 
[2024-11-19 08:45:53.769159] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:14.714 [2024-11-19 08:45:53.769206] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:14.714 [2024-11-19 08:45:53.769224] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:14.714 [2024-11-19 08:45:53.769237] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:14.714 [2024-11-19 08:45:53.769248] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:14.714 [2024-11-19 08:45:53.769258] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:14.714 [2024-11-19 08:45:53.769269] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:14.714 [2024-11-19 08:45:53.769280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-11-19 08:45:53.769291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:14.714 [2024-11-19 08:45:53.769303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:27:14.714 [2024-11-19 08:45:53.769313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.714 [2024-11-19 08:45:53.769413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.714 [2024-11-19 08:45:53.769435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:14.714 [2024-11-19 08:45:53.769447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:27:14.714 [2024-11-19 08:45:53.769458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.715 [2024-11-19 08:45:53.769594] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:14.715 [2024-11-19 08:45:53.769653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:14.715 [2024-11-19 08:45:53.769668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:14.715 [2024-11-19 08:45:53.769678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.715 [2024-11-19 08:45:53.769689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:14.715 [2024-11-19 08:45:53.769699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:14.715 [2024-11-19 08:45:53.769709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:14.715 [2024-11-19 08:45:53.769718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:14.715 [2024-11-19 08:45:53.769728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:14.715 [2024-11-19 08:45:53.769737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:14.715 [2024-11-19 08:45:53.769747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:14.715 [2024-11-19 08:45:53.769771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:14.715 [2024-11-19 08:45:53.769780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:14.715 [2024-11-19 08:45:53.769789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:14.715 [2024-11-19 08:45:53.769799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:14.715 [2024-11-19 08:45:53.769810] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.715 [2024-11-19 08:45:53.769820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:14.715 [2024-11-19 08:45:53.769829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:14.715 [2024-11-19 08:45:53.769839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.715 [2024-11-19 08:45:53.769848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:14.715 [2024-11-19 08:45:53.769857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:14.715 [2024-11-19 08:45:53.769867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.715 [2024-11-19 08:45:53.769876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:14.715 [2024-11-19 08:45:53.769885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:14.715 [2024-11-19 08:45:53.769895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.715 [2024-11-19 08:45:53.769904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:14.715 [2024-11-19 08:45:53.769914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:14.715 [2024-11-19 08:45:53.769923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.715 [2024-11-19 08:45:53.769932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:14.715 [2024-11-19 08:45:53.769942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:14.715 [2024-11-19 08:45:53.769966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.715 [2024-11-19 08:45:53.769974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:14.715 [2024-11-19 08:45:53.769985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:14.715 [2024-11-19 08:45:53.769994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:14.715 [2024-11-19 08:45:53.770003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:14.715 [2024-11-19 08:45:53.770012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:14.715 [2024-11-19 08:45:53.770021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:14.715 [2024-11-19 08:45:53.770030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:14.715 [2024-11-19 08:45:53.770039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:14.715 [2024-11-19 08:45:53.770048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.715 [2024-11-19 08:45:53.770057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:14.715 [2024-11-19 08:45:53.770066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:14.715 [2024-11-19 08:45:53.770075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.715 [2024-11-19 08:45:53.770084] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:14.715 [2024-11-19 08:45:53.770094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:14.715 [2024-11-19 08:45:53.770104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:14.715 [2024-11-19 08:45:53.770118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.715 [2024-11-19 
08:45:53.770129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:14.715 [2024-11-19 08:45:53.770154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:14.715 [2024-11-19 08:45:53.770163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:14.715 [2024-11-19 08:45:53.770191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:14.715 [2024-11-19 08:45:53.770200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:14.715 [2024-11-19 08:45:53.770210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:14.715 [2024-11-19 08:45:53.770222] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:14.715 [2024-11-19 08:45:53.770236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:14.715 [2024-11-19 08:45:53.770249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:14.715 [2024-11-19 08:45:53.770260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:14.715 [2024-11-19 08:45:53.770271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:14.715 [2024-11-19 08:45:53.770282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:14.715 [2024-11-19 08:45:53.770293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:14.715 [2024-11-19 08:45:53.770304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:14.715 [2024-11-19 08:45:53.770314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:14.715 [2024-11-19 08:45:53.770325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:14.715 [2024-11-19 08:45:53.770337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:14.715 [2024-11-19 08:45:53.770348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:14.715 [2024-11-19 08:45:53.770358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:14.715 [2024-11-19 08:45:53.770369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:14.715 [2024-11-19 08:45:53.770380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:14.715 [2024-11-19 08:45:53.770391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:14.715 [2024-11-19 08:45:53.770402] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:27:14.715 [2024-11-19 08:45:53.770414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:14.715 [2024-11-19 08:45:53.770426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:14.715 [2024-11-19 08:45:53.770437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:14.715 [2024-11-19 08:45:53.770448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:14.715 [2024-11-19 08:45:53.770460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:14.715 [2024-11-19 08:45:53.770472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.715 [2024-11-19 08:45:53.770483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:14.715 [2024-11-19 08:45:53.770495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:27:14.715 [2024-11-19 08:45:53.770506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.715 [2024-11-19 08:45:53.806773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.715 [2024-11-19 08:45:53.806837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:14.715 [2024-11-19 08:45:53.806873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.161 ms 00:27:14.715 [2024-11-19 08:45:53.806885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.715 [2024-11-19 08:45:53.807011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.715 [2024-11-19 08:45:53.807047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:14.715 [2024-11-19 08:45:53.807058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:27:14.715 [2024-11-19 08:45:53.807067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.715 [2024-11-19 08:45:53.863837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.715 [2024-11-19 08:45:53.863902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:14.715 [2024-11-19 08:45:53.863924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.614 ms 00:27:14.715 [2024-11-19 08:45:53.863942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.715 [2024-11-19 08:45:53.864038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.715 [2024-11-19 08:45:53.864056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:14.715 [2024-11-19 08:45:53.864069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:14.715 [2024-11-19 08:45:53.864079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.715 [2024-11-19 08:45:53.864506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.715 [2024-11-19 08:45:53.864527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:14.715 [2024-11-19 08:45:53.864540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:27:14.715 [2024-11-19 08:45:53.864550] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.716 [2024-11-19 08:45:53.864736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.716 [2024-11-19 08:45:53.864757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:14.716 [2024-11-19 08:45:53.864770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:27:14.716 [2024-11-19 08:45:53.864781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.716 [2024-11-19 08:45:53.881907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.716 [2024-11-19 08:45:53.881949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:14.716 [2024-11-19 08:45:53.881964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.097 ms 00:27:14.716 [2024-11-19 08:45:53.881975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.716 [2024-11-19 08:45:53.899430] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:14.716 [2024-11-19 08:45:53.899483] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:14.716 [2024-11-19 08:45:53.899509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.716 [2024-11-19 08:45:53.899532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:14.716 [2024-11-19 08:45:53.899546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.356 ms 00:27:14.716 [2024-11-19 08:45:53.899555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.716 [2024-11-19 08:45:53.927363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.716 [2024-11-19 08:45:53.927421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:14.716 [2024-11-19 08:45:53.927471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.681 ms 00:27:14.716 [2024-11-19 08:45:53.927483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.716 [2024-11-19 08:45:53.944061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.716 [2024-11-19 08:45:53.944108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:14.716 [2024-11-19 08:45:53.944123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.404 ms 00:27:14.716 [2024-11-19 08:45:53.944135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.716 [2024-11-19 08:45:53.959865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.716 [2024-11-19 08:45:53.959920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:14.716 [2024-11-19 08:45:53.959956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.605 ms 00:27:14.716 [2024-11-19 08:45:53.959966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.716 [2024-11-19 08:45:53.960974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.716 [2024-11-19 08:45:53.961007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:14.716 [2024-11-19 08:45:53.961053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:27:14.716 [2024-11-19 08:45:53.961063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:14.976 [2024-11-19 08:45:54.046232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.976 [2024-11-19 08:45:54.046290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:14.976 [2024-11-19 08:45:54.046309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.137 ms 00:27:14.976 [2024-11-19 08:45:54.046320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.976 [2024-11-19 08:45:54.058485] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:14.976 [2024-11-19 08:45:54.061281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.976 [2024-11-19 08:45:54.061325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:14.976 [2024-11-19 08:45:54.061339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.889 ms 00:27:14.976 [2024-11-19 08:45:54.061349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.976 [2024-11-19 08:45:54.061469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.976 [2024-11-19 08:45:54.061489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:14.976 [2024-11-19 08:45:54.061517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:14.976 [2024-11-19 08:45:54.061558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.976 [2024-11-19 08:45:54.061682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.976 [2024-11-19 08:45:54.061735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:14.976 [2024-11-19 08:45:54.061750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:14.976 [2024-11-19 08:45:54.061761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.976 [2024-11-19 08:45:54.061794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.976 [2024-11-19 08:45:54.061818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:14.976 [2024-11-19 08:45:54.061829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:14.976 [2024-11-19 08:45:54.061840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.976 [2024-11-19 08:45:54.061881] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:14.976 [2024-11-19 08:45:54.061898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.976 [2024-11-19 08:45:54.061909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:14.976 [2024-11-19 08:45:54.061920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:14.976 [2024-11-19 08:45:54.061930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.976 [2024-11-19 08:45:54.092722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.976 [2024-11-19 08:45:54.092789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:14.976 [2024-11-19 08:45:54.092810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.761 ms 00:27:14.976 [2024-11-19 08:45:54.092822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.976 [2024-11-19 08:45:54.092933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.976 [2024-11-19 
08:45:54.092953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:14.976 [2024-11-19 08:45:54.092966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:14.976 [2024-11-19 08:45:54.092977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.976 [2024-11-19 08:45:54.094397] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 355.037 ms, result 0 00:27:15.914 [2024-11-19T08:46:38.790Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-19 08:46:38.580527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.494 [2024-11-19 08:46:38.580842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:59.494 [2024-11-19 08:46:38.580881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006
ms 00:27:59.494 [2024-11-19 08:46:38.580896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.494 [2024-11-19 08:46:38.582596] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:59.494 [2024-11-19 08:46:38.590745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.494 [2024-11-19 08:46:38.590803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:59.494 [2024-11-19 08:46:38.590833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.071 ms 00:27:59.494 [2024-11-19 08:46:38.590847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.494 [2024-11-19 08:46:38.606254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.494 [2024-11-19 08:46:38.606320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:59.494 [2024-11-19 08:46:38.606351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.247 ms 00:27:59.494 [2024-11-19 08:46:38.606366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.494 [2024-11-19 08:46:38.629523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.494 [2024-11-19 08:46:38.629591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:59.494 [2024-11-19 08:46:38.629634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.130 ms 00:27:59.494 [2024-11-19 08:46:38.629650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.494 [2024-11-19 08:46:38.638175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.494 [2024-11-19 08:46:38.638249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:59.494 [2024-11-19 08:46:38.638276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.476 ms 00:27:59.494 [2024-11-19 08:46:38.638290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.494 [2024-11-19 08:46:38.673071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.494 [2024-11-19 08:46:38.673125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:59.494 [2024-11-19 08:46:38.673156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.718 ms 00:27:59.494 [2024-11-19 08:46:38.673167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.494 [2024-11-19 08:46:38.689324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.494 [2024-11-19 08:46:38.689373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:59.494 [2024-11-19 08:46:38.689405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.116 ms 00:27:59.494 [2024-11-19 08:46:38.689416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.494 [2024-11-19 08:46:38.775335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.494 [2024-11-19 08:46:38.775395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:59.494 [2024-11-19 08:46:38.775443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.875 ms 00:27:59.494 [2024-11-19 08:46:38.775462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.754 [2024-11-19 08:46:38.804164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.754 [2024-11-19 
08:46:38.804217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:59.754 [2024-11-19 08:46:38.804249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.680 ms 00:27:59.754 [2024-11-19 08:46:38.804259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.754 [2024-11-19 08:46:38.831247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.754 [2024-11-19 08:46:38.831300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:59.754 [2024-11-19 08:46:38.831346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.948 ms 00:27:59.754 [2024-11-19 08:46:38.831356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.754 [2024-11-19 08:46:38.857882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.754 [2024-11-19 08:46:38.857942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:59.754 [2024-11-19 08:46:38.857974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.485 ms 00:27:59.754 [2024-11-19 08:46:38.857984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.754 [2024-11-19 08:46:38.884466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.754 [2024-11-19 08:46:38.884519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:59.754 [2024-11-19 08:46:38.884549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.386 ms 00:27:59.754 [2024-11-19 08:46:38.884574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.754 [2024-11-19 08:46:38.884638] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:59.754 [2024-11-19 08:46:38.884659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 94208 / 261120 wr_cnt: 1 state: open 00:27:59.754 [2024-11-19 08:46:38.884671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884778] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.884991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 
[2024-11-19 08:46:38.885062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:27:59.754 [2024-11-19 08:46:38.885324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:59.754 [2024-11-19 08:46:38.885413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:59.755 [2024-11-19 08:46:38.885740] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:59.755 [2024-11-19 08:46:38.885751] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 68a09d1c-a153-421e-98ab-c545dfae1eab 00:27:59.755 [2024-11-19 08:46:38.885761] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 94208 00:27:59.755 [2024-11-19 08:46:38.885777] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 95168 00:27:59.755 [2024-11-19 08:46:38.885799] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 94208 00:27:59.755 [2024-11-19 08:46:38.885811] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0102 00:27:59.755 [2024-11-19 08:46:38.885820] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:59.755 [2024-11-19 08:46:38.885830] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:59.755 [2024-11-19 08:46:38.885840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:59.755 [2024-11-19 08:46:38.885849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:59.755 [2024-11-19 08:46:38.885857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:59.755 [2024-11-19 08:46:38.885867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.755 [2024-11-19 08:46:38.885877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:59.755 [2024-11-19 08:46:38.885888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.231 ms 00:27:59.755 [2024-11-19 08:46:38.885898] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.755 [2024-11-19 08:46:38.900713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.755 [2024-11-19 08:46:38.900788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:59.755 [2024-11-19 08:46:38.900802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.779 ms 00:27:59.755 [2024-11-19 08:46:38.900813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.755 [2024-11-19 08:46:38.901211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.755 [2024-11-19 08:46:38.901238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:59.755 [2024-11-19 08:46:38.901251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:27:59.755 [2024-11-19 08:46:38.901261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.755 [2024-11-19 08:46:38.937916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.755 [2024-11-19 08:46:38.937974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:59.755 [2024-11-19 08:46:38.938004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.755 [2024-11-19 08:46:38.938014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.755 [2024-11-19 08:46:38.938071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.755 [2024-11-19 08:46:38.938085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:59.755 [2024-11-19 08:46:38.938095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.755 [2024-11-19 08:46:38.938104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.755 [2024-11-19 08:46:38.938206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.755 [2024-11-19 08:46:38.938239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:59.755 [2024-11-19 08:46:38.938267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.755 [2024-11-19 08:46:38.938276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.755 [2024-11-19 08:46:38.938298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.755 [2024-11-19 08:46:38.938309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:59.755 [2024-11-19 08:46:38.938320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.755 [2024-11-19 08:46:38.938329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.755 [2024-11-19 08:46:39.025094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.755 [2024-11-19 08:46:39.025174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:59.755 [2024-11-19 08:46:39.025206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.755 [2024-11-19 08:46:39.025216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.014 [2024-11-19 08:46:39.097588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.014 [2024-11-19 08:46:39.097704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:00.014 [2024-11-19 08:46:39.097722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:28:00.014 [2024-11-19 08:46:39.097733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.014 [2024-11-19 08:46:39.097813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.014 [2024-11-19 08:46:39.097828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:00.014 [2024-11-19 08:46:39.097852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.014 [2024-11-19 08:46:39.097862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.014 [2024-11-19 08:46:39.097923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.014 [2024-11-19 08:46:39.097954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:00.014 [2024-11-19 08:46:39.097965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.014 [2024-11-19 08:46:39.097974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.014 [2024-11-19 08:46:39.098102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.014 [2024-11-19 08:46:39.098127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:00.014 [2024-11-19 08:46:39.098138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.014 [2024-11-19 08:46:39.098163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.014 [2024-11-19 08:46:39.098212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.014 [2024-11-19 08:46:39.098228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:00.014 [2024-11-19 08:46:39.098239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.015 [2024-11-19 08:46:39.098250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.015 [2024-11-19 08:46:39.098291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.015 [2024-11-19 08:46:39.098312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:00.015 [2024-11-19 08:46:39.098323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.015 [2024-11-19 08:46:39.098333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.015 [2024-11-19 08:46:39.098381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:00.015 [2024-11-19 08:46:39.098398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:00.015 [2024-11-19 08:46:39.098409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:00.015 [2024-11-19 08:46:39.098418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:00.015 [2024-11-19 08:46:39.098564] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 521.806 ms, result 0 00:28:00.950 00:28:00.950 00:28:00.950 08:46:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:03.484 08:46:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:03.484 [2024-11-19 08:46:42.276075] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 
initialization... 00:28:03.484 [2024-11-19 08:46:42.276292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80201 ] 00:28:03.484 [2024-11-19 08:46:42.464114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.484 [2024-11-19 08:46:42.589781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.744 [2024-11-19 08:46:42.864810] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:03.744 [2024-11-19 08:46:42.864916] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:03.744 [2024-11-19 08:46:43.022711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.744 [2024-11-19 08:46:43.022777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:03.744 [2024-11-19 08:46:43.022818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:03.744 [2024-11-19 08:46:43.022828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.744 [2024-11-19 08:46:43.022886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.744 [2024-11-19 08:46:43.022902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:03.744 [2024-11-19 08:46:43.022917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:03.744 [2024-11-19 08:46:43.022927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.744 [2024-11-19 08:46:43.022954] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:03.744 [2024-11-19 08:46:43.024048] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:03.744 [2024-11-19 08:46:43.024104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.744 [2024-11-19 08:46:43.024132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:03.744 [2024-11-19 08:46:43.024143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.155 ms 00:28:03.744 [2024-11-19 08:46:43.024153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.744 [2024-11-19 08:46:43.025335] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:04.004 [2024-11-19 08:46:43.040874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.004 [2024-11-19 08:46:43.040930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:04.004 [2024-11-19 08:46:43.040962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.540 ms 00:28:04.004 [2024-11-19 08:46:43.040974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.004 [2024-11-19 08:46:43.041067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.004 [2024-11-19 08:46:43.041085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:04.004 [2024-11-19 08:46:43.041111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:28:04.004 [2024-11-19 08:46:43.041121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.004 [2024-11-19 08:46:43.045477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:04.004 [2024-11-19 08:46:43.045532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:04.004 [2024-11-19 08:46:43.045561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.240 ms 00:28:04.004 [2024-11-19 08:46:43.045572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.004 [2024-11-19 08:46:43.045689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.004 [2024-11-19 08:46:43.045708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:04.004 [2024-11-19 08:46:43.045719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:28:04.004 [2024-11-19 08:46:43.045729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.004 [2024-11-19 08:46:43.045780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.004 [2024-11-19 08:46:43.045796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:04.004 [2024-11-19 08:46:43.045823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:04.004 [2024-11-19 08:46:43.045849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.004 [2024-11-19 08:46:43.045896] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:04.004 [2024-11-19 08:46:43.049921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.004 [2024-11-19 08:46:43.049972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:04.004 [2024-11-19 08:46:43.050002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.034 ms 00:28:04.004 [2024-11-19 08:46:43.050032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.004 [2024-11-19 08:46:43.050074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.004 [2024-11-19 08:46:43.050089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:04.004 [2024-11-19 08:46:43.050100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:04.004 [2024-11-19 08:46:43.050110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.004 [2024-11-19 08:46:43.050150] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:04.004 [2024-11-19 08:46:43.050193] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:04.004 [2024-11-19 08:46:43.050265] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:04.004 [2024-11-19 08:46:43.050289] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:04.004 [2024-11-19 08:46:43.050395] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:04.004 [2024-11-19 08:46:43.050410] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:04.004 [2024-11-19 08:46:43.050424] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:04.004 [2024-11-19 08:46:43.050438] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:04.004 [2024-11-19 08:46:43.050450] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:04.004 [2024-11-19 08:46:43.050462] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:04.004 [2024-11-19 08:46:43.050473] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:04.004 [2024-11-19 08:46:43.050483] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:04.004 [2024-11-19 08:46:43.050493] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:04.004 [2024-11-19 08:46:43.050510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.004 [2024-11-19 08:46:43.050520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:04.004 [2024-11-19 08:46:43.050531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:28:04.004 [2024-11-19 08:46:43.050541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.004 [2024-11-19 08:46:43.050628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.004 [2024-11-19 08:46:43.050641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:04.004 [2024-11-19 08:46:43.050652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:28:04.004 [2024-11-19 08:46:43.050662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.005 [2024-11-19 08:46:43.050813] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:04.005 [2024-11-19 08:46:43.050847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:04.005 [2024-11-19 08:46:43.050862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:04.005 [2024-11-19 08:46:43.050873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.005 [2024-11-19 08:46:43.050884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:04.005 [2024-11-19 08:46:43.050894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:04.005 [2024-11-19 08:46:43.050905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:04.005 [2024-11-19 08:46:43.050915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:04.005 [2024-11-19 08:46:43.050925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:04.005 [2024-11-19 08:46:43.050935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:04.005 [2024-11-19 08:46:43.050946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:04.005 [2024-11-19 08:46:43.050956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:04.005 [2024-11-19 08:46:43.050965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:04.005 [2024-11-19 08:46:43.050975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:04.005 [2024-11-19 08:46:43.050991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:04.005 [2024-11-19 08:46:43.051013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.005 [2024-11-19 08:46:43.051023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:04.005 [2024-11-19 08:46:43.051034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:04.005 [2024-11-19 08:46:43.051044] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.005 [2024-11-19 08:46:43.051054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:04.005 [2024-11-19 08:46:43.051064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:04.005 [2024-11-19 08:46:43.051073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.005 [2024-11-19 08:46:43.051084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:04.005 [2024-11-19 08:46:43.051109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:04.005 [2024-11-19 08:46:43.051118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.005 [2024-11-19 08:46:43.051128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:04.005 [2024-11-19 08:46:43.051137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:04.005 [2024-11-19 08:46:43.051162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.005 [2024-11-19 08:46:43.051173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:04.005 [2024-11-19 08:46:43.051182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:04.005 [2024-11-19 08:46:43.051192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:04.005 [2024-11-19 08:46:43.051202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:04.005 [2024-11-19 08:46:43.051212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:04.005 [2024-11-19 08:46:43.051221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:04.005 [2024-11-19 08:46:43.051231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:04.005 [2024-11-19 08:46:43.051241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:04.005 [2024-11-19 08:46:43.051251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:04.005 [2024-11-19 08:46:43.051260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:04.005 [2024-11-19 08:46:43.051271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:04.005 [2024-11-19 08:46:43.051281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.005 [2024-11-19 08:46:43.051291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:04.005 [2024-11-19 08:46:43.051301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:04.005 [2024-11-19 08:46:43.051311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.005 [2024-11-19 08:46:43.051320] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:04.005 [2024-11-19 08:46:43.051331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:04.005 [2024-11-19 08:46:43.051342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:04.005 [2024-11-19 08:46:43.051357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:04.005 [2024-11-19 08:46:43.051368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:04.005 [2024-11-19 08:46:43.051378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:04.005 [2024-11-19 08:46:43.051388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:04.005 
[2024-11-19 08:46:43.051399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:04.005 [2024-11-19 08:46:43.051408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:04.005 [2024-11-19 08:46:43.051419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:04.005 [2024-11-19 08:46:43.051430] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:04.005 [2024-11-19 08:46:43.051443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:04.005 [2024-11-19 08:46:43.051455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:04.005 [2024-11-19 08:46:43.051466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:04.005 [2024-11-19 08:46:43.051477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:04.005 [2024-11-19 08:46:43.051488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:04.005 [2024-11-19 08:46:43.051498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:04.005 [2024-11-19 08:46:43.051509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:04.005 [2024-11-19 08:46:43.051519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:04.005 [2024-11-19 08:46:43.051530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:04.005 [2024-11-19 08:46:43.051540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:04.005 [2024-11-19 08:46:43.051566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:04.005 [2024-11-19 08:46:43.051577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:04.005 [2024-11-19 08:46:43.051587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:04.005 [2024-11-19 08:46:43.051597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:04.005 [2024-11-19 08:46:43.051608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:04.005 [2024-11-19 08:46:43.051618] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:04.005 [2024-11-19 08:46:43.051634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:04.005 [2024-11-19 08:46:43.051662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:04.005 [2024-11-19 08:46:43.051743] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:04.005 [2024-11-19 08:46:43.051756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:04.005 [2024-11-19 08:46:43.051768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:04.005 [2024-11-19 08:46:43.051780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.005 [2024-11-19 08:46:43.051791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:04.005 [2024-11-19 08:46:43.051803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.038 ms 00:28:04.005 [2024-11-19 08:46:43.051817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.005 [2024-11-19 08:46:43.081936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.005 [2024-11-19 08:46:43.082020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:04.005 [2024-11-19 08:46:43.082055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.027 ms 00:28:04.005 [2024-11-19 08:46:43.082065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.005 [2024-11-19 08:46:43.082181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.005 [2024-11-19 08:46:43.082211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:04.005 [2024-11-19 08:46:43.082223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:04.005 [2024-11-19 08:46:43.082233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.005 [2024-11-19 08:46:43.125314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.005 [2024-11-19 08:46:43.125396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:04.005 [2024-11-19 08:46:43.125431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.979 ms 00:28:04.005 [2024-11-19 08:46:43.125442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.005 [2024-11-19 08:46:43.125522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.005 [2024-11-19 08:46:43.125537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:04.005 [2024-11-19 08:46:43.125549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:04.005 [2024-11-19 08:46:43.125566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.005 [2024-11-19 08:46:43.126020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.005 [2024-11-19 08:46:43.126050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:04.005 [2024-11-19 08:46:43.126064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:28:04.005 [2024-11-19 08:46:43.126075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.006 [2024-11-19 08:46:43.126225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.006 [2024-11-19 08:46:43.126244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:04.006 [2024-11-19 08:46:43.126256] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:28:04.006 [2024-11-19 08:46:43.126274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.006 [2024-11-19 08:46:43.140958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.006 [2024-11-19 08:46:43.141020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:04.006 [2024-11-19 08:46:43.141056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.657 ms 00:28:04.006 [2024-11-19 08:46:43.141066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.006 [2024-11-19 08:46:43.155608] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:04.006 [2024-11-19 08:46:43.155721] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:04.006 [2024-11-19 08:46:43.155758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.006 [2024-11-19 08:46:43.155771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:04.006 [2024-11-19 08:46:43.155787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.553 ms 00:28:04.006 [2024-11-19 08:46:43.155798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.006 [2024-11-19 08:46:43.184283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.006 [2024-11-19 08:46:43.184339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:04.006 [2024-11-19 08:46:43.184375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.396 ms 00:28:04.006 [2024-11-19 08:46:43.184387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.006 [2024-11-19 08:46:43.201259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.006 [2024-11-19 08:46:43.201315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:04.006 [2024-11-19 08:46:43.201348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.825 ms 00:28:04.006 [2024-11-19 08:46:43.201360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.006 [2024-11-19 08:46:43.216280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.006 [2024-11-19 08:46:43.216334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:04.006 [2024-11-19 08:46:43.216365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.870 ms 00:28:04.006 [2024-11-19 08:46:43.216382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.006 [2024-11-19 08:46:43.217256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.006 [2024-11-19 08:46:43.217304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:04.006 [2024-11-19 08:46:43.217351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:28:04.006 [2024-11-19 08:46:43.217366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.006 [2024-11-19 08:46:43.292208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.006 [2024-11-19 08:46:43.292306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:04.006 [2024-11-19 08:46:43.292350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 74.816 ms 00:28:04.006 [2024-11-19 08:46:43.292362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.265 [2024-11-19 08:46:43.305792] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:04.265 [2024-11-19 08:46:43.308430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.265 [2024-11-19 08:46:43.308495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:04.265 [2024-11-19 08:46:43.308512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.998 ms 00:28:04.265 [2024-11-19 08:46:43.308523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.265 [2024-11-19 08:46:43.308647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.265 [2024-11-19 08:46:43.308668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:04.265 [2024-11-19 08:46:43.308681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:04.265 [2024-11-19 08:46:43.308701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.265 [2024-11-19 08:46:43.310109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.265 [2024-11-19 08:46:43.310157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:04.265 [2024-11-19 08:46:43.310202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:28:04.265 [2024-11-19 08:46:43.310230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.265 [2024-11-19 08:46:43.310265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.265 [2024-11-19 08:46:43.310279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:04.265 [2024-11-19 08:46:43.310291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:04.265 [2024-11-19 08:46:43.310302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.265 [2024-11-19 08:46:43.310343] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:04.265 [2024-11-19 08:46:43.310367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.265 [2024-11-19 08:46:43.310378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:04.265 [2024-11-19 08:46:43.310405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:04.265 [2024-11-19 08:46:43.310433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.265 [2024-11-19 08:46:43.342962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.265 [2024-11-19 08:46:43.343019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:04.265 [2024-11-19 08:46:43.343052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.505 ms 00:28:04.265 [2024-11-19 08:46:43.343071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.265 [2024-11-19 08:46:43.343201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.265 [2024-11-19 08:46:43.343238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:04.265 [2024-11-19 08:46:43.343262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:28:04.265 [2024-11-19 08:46:43.343274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
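(Editor's note: every management step above is bracketed by the same four-field trace record — Action / name / duration / status — emitted from mngt/ftl_mngt.c:trace_step. For readers following the log, here is a minimal, self-contained C sketch of that timing pattern. It is an illustration only, not SPDK's implementation: run_step and the stand-in step body are invented for the example; only the step name "Initialize valid map" is taken from the log.)

```c
/* Minimal sketch of the Action/name/duration/status trace pattern seen
 * above. Illustrative only -- run_step and init_valid_map are invented
 * for this example and are not SPDK APIs. */
#include <stdio.h>
#include <time.h>

static double elapsed_ms(struct timespec a, struct timespec b)
{
	return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

static int run_step(const char *name, int (*fn)(void))
{
	struct timespec start, end;
	int status;

	printf("[FTL][ftl0] Action\n");
	printf("[FTL][ftl0]  name:     %s\n", name);
	clock_gettime(CLOCK_MONOTONIC, &start);
	status = fn();                        /* the actual work of the step */
	clock_gettime(CLOCK_MONOTONIC, &end);
	printf("[FTL][ftl0]  duration: %.3f ms\n", elapsed_ms(start, end));
	printf("[FTL][ftl0]  status:   %d\n", status);
	return status;
}

static int init_valid_map(void) { return 0; }  /* stand-in step body */

int main(void)
{
	return run_step("Initialize valid map", init_valid_map);
}
```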
00:28:04.265 [2024-11-19 08:46:43.344587] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 321.170 ms, result 0 00:28:05.664  [2024-11-19T08:46:45.898Z] Copying: 864/1048576 [kB] (864 kBps) [2024-11-19T08:46:46.835Z] Copying: 2904/1048576 [kB] (2040 kBps) [2024-11-19T08:46:47.772Z] Copying: 13/1024 [MB] (10 MBps) [2024-11-19T08:46:48.710Z] Copying: 39/1024 [MB] (26 MBps) [2024-11-19T08:46:49.646Z] Copying: 66/1024 [MB] (26 MBps) [2024-11-19T08:46:50.584Z] Copying: 93/1024 [MB] (27 MBps) [2024-11-19T08:46:51.963Z] Copying: 121/1024 [MB] (27 MBps) [2024-11-19T08:46:52.900Z] Copying: 148/1024 [MB] (26 MBps) [2024-11-19T08:46:53.837Z] Copying: 174/1024 [MB] (26 MBps) [2024-11-19T08:46:54.775Z] Copying: 201/1024 [MB] (27 MBps) [2024-11-19T08:46:55.710Z] Copying: 228/1024 [MB] (26 MBps) [2024-11-19T08:46:56.646Z] Copying: 255/1024 [MB] (26 MBps) [2024-11-19T08:46:57.582Z] Copying: 280/1024 [MB] (25 MBps) [2024-11-19T08:46:58.958Z] Copying: 307/1024 [MB] (26 MBps) [2024-11-19T08:46:59.917Z] Copying: 333/1024 [MB] (26 MBps) [2024-11-19T08:47:00.854Z] Copying: 360/1024 [MB] (26 MBps) [2024-11-19T08:47:01.790Z] Copying: 388/1024 [MB] (27 MBps) [2024-11-19T08:47:02.728Z] Copying: 416/1024 [MB] (27 MBps) [2024-11-19T08:47:03.666Z] Copying: 443/1024 [MB] (27 MBps) [2024-11-19T08:47:04.602Z] Copying: 471/1024 [MB] (27 MBps) [2024-11-19T08:47:05.979Z] Copying: 498/1024 [MB] (27 MBps) [2024-11-19T08:47:06.914Z] Copying: 525/1024 [MB] (27 MBps) [2024-11-19T08:47:07.848Z] Copying: 553/1024 [MB] (27 MBps) [2024-11-19T08:47:08.784Z] Copying: 582/1024 [MB] (28 MBps) [2024-11-19T08:47:09.720Z] Copying: 610/1024 [MB] (28 MBps) [2024-11-19T08:47:10.657Z] Copying: 638/1024 [MB] (27 MBps) [2024-11-19T08:47:11.593Z] Copying: 666/1024 [MB] (27 MBps) [2024-11-19T08:47:12.987Z] Copying: 695/1024 [MB] (28 MBps) [2024-11-19T08:47:13.570Z] Copying: 724/1024 [MB] (29 MBps) [2024-11-19T08:47:14.947Z] Copying: 752/1024 [MB] (28 MBps) [2024-11-19T08:47:15.882Z] Copying: 781/1024 [MB] (28 MBps) [2024-11-19T08:47:16.818Z] Copying: 809/1024 [MB] (27 MBps) [2024-11-19T08:47:17.754Z] Copying: 837/1024 [MB] (28 MBps) [2024-11-19T08:47:18.689Z] Copying: 867/1024 [MB] (29 MBps) [2024-11-19T08:47:19.626Z] Copying: 897/1024 [MB] (29 MBps) [2024-11-19T08:47:20.562Z] Copying: 926/1024 [MB] (29 MBps) [2024-11-19T08:47:21.940Z] Copying: 955/1024 [MB] (29 MBps) [2024-11-19T08:47:22.877Z] Copying: 984/1024 [MB] (28 MBps) [2024-11-19T08:47:23.135Z] Copying: 1011/1024 [MB] (27 MBps) [2024-11-19T08:47:23.394Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-19 08:47:23.210521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.098 [2024-11-19 08:47:23.210643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:44.098 [2024-11-19 08:47:23.210682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:44.098 [2024-11-19 08:47:23.210699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.098 [2024-11-19 08:47:23.210762] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:44.098 [2024-11-19 08:47:23.215162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.098 [2024-11-19 08:47:23.215201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:44.098 [2024-11-19 08:47:23.215220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.360 ms 00:28:44.098 
[2024-11-19 08:47:23.215235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.098 [2024-11-19 08:47:23.215553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.098 [2024-11-19 08:47:23.215586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:44.098 [2024-11-19 08:47:23.215625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:28:44.098 [2024-11-19 08:47:23.215642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.098 [2024-11-19 08:47:23.227821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.098 [2024-11-19 08:47:23.227870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:44.098 [2024-11-19 08:47:23.227888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.150 ms 00:28:44.098 [2024-11-19 08:47:23.227901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.098 [2024-11-19 08:47:23.233569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.098 [2024-11-19 08:47:23.233617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:44.099 [2024-11-19 08:47:23.233638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.627 ms 00:28:44.099 [2024-11-19 08:47:23.233662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.099 [2024-11-19 08:47:23.264332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.099 [2024-11-19 08:47:23.264370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:44.099 [2024-11-19 08:47:23.264386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.598 ms 00:28:44.099 [2024-11-19 08:47:23.264398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.099 [2024-11-19 08:47:23.282692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.099 [2024-11-19 08:47:23.282757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:44.099 [2024-11-19 08:47:23.282774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.249 ms 00:28:44.099 [2024-11-19 08:47:23.282786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.099 [2024-11-19 08:47:23.284554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.099 [2024-11-19 08:47:23.284592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:44.099 [2024-11-19 08:47:23.284620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.722 ms 00:28:44.099 [2024-11-19 08:47:23.284635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.099 [2024-11-19 08:47:23.315415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.099 [2024-11-19 08:47:23.315467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:44.099 [2024-11-19 08:47:23.315482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.749 ms 00:28:44.099 [2024-11-19 08:47:23.315493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.099 [2024-11-19 08:47:23.346981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.099 [2024-11-19 08:47:23.347032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:44.099 [2024-11-19 08:47:23.347078] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.444 ms 00:28:44.099 [2024-11-19 08:47:23.347089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.099 [2024-11-19 08:47:23.375280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.099 [2024-11-19 08:47:23.375332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:44.099 [2024-11-19 08:47:23.375363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.147 ms 00:28:44.099 [2024-11-19 08:47:23.375373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.358 [2024-11-19 08:47:23.406840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.358 [2024-11-19 08:47:23.406882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:44.358 [2024-11-19 08:47:23.406898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.383 ms 00:28:44.358 [2024-11-19 08:47:23.406909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.358 [2024-11-19 08:47:23.406952] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:44.358 [2024-11-19 08:47:23.406976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:44.358 [2024-11-19 08:47:23.407019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:44.358 [2024-11-19 08:47:23.407045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:44.358 [2024-11-19 08:47:23.407055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:44.358 [2024-11-19 08:47:23.407066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:44.358 [2024-11-19 08:47:23.407083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:44.358 [2024-11-19 08:47:23.407094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:44.358 [2024-11-19 08:47:23.407104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:44.358 [2024-11-19 08:47:23.407121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:44.358 [2024-11-19 08:47:23.407132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:44.358 [2024-11-19 08:47:23.407174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:44.358 [2024-11-19 08:47:23.407185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407259] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407561] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 
08:47:23.407897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.407990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:28:44.359 [2024-11-19 08:47:23.408210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:44.359 [2024-11-19 08:47:23.408329] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:44.359 [2024-11-19 08:47:23.408340] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 68a09d1c-a153-421e-98ab-c545dfae1eab 00:28:44.360 [2024-11-19 08:47:23.408353] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:44.360 [2024-11-19 08:47:23.408363] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 170432 00:28:44.360 [2024-11-19 08:47:23.408374] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 168448 00:28:44.360 [2024-11-19 08:47:23.408391] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0118 00:28:44.360 [2024-11-19 08:47:23.408402] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:44.360 [2024-11-19 08:47:23.408414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:44.360 [2024-11-19 08:47:23.408424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:44.360 [2024-11-19 08:47:23.408446] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:44.360 [2024-11-19 08:47:23.408457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:44.360 [2024-11-19 08:47:23.408482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.360 [2024-11-19 08:47:23.408493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:44.360 [2024-11-19 08:47:23.408504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.532 ms 00:28:44.360 [2024-11-19 08:47:23.408516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.360 [2024-11-19 08:47:23.425269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:44.360 [2024-11-19 08:47:23.425325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:44.360 [2024-11-19 08:47:23.425355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.712 ms 00:28:44.360 [2024-11-19 08:47:23.425366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.360 [2024-11-19 08:47:23.425844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:44.360 [2024-11-19 08:47:23.425875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:44.360 [2024-11-19 08:47:23.425889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:28:44.360 [2024-11-19 08:47:23.425900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.360 [2024-11-19 08:47:23.466507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.360 [2024-11-19 08:47:23.466570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:44.360 [2024-11-19 08:47:23.466602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.360 [2024-11-19 08:47:23.466615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.360 [2024-11-19 08:47:23.466693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.360 [2024-11-19 08:47:23.466709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:44.360 [2024-11-19 08:47:23.466722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.360 [2024-11-19 08:47:23.466732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.360 [2024-11-19 08:47:23.466827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.360 [2024-11-19 08:47:23.466852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:44.360 [2024-11-19 08:47:23.466865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.360 [2024-11-19 08:47:23.466876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.360 [2024-11-19 08:47:23.466900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.360 [2024-11-19 08:47:23.466913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:44.360 [2024-11-19 08:47:23.466925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.360 [2024-11-19 08:47:23.466935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.360 [2024-11-19 08:47:23.568071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.360 [2024-11-19 08:47:23.568149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:44.360 [2024-11-19 08:47:23.568183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.360 [2024-11-19 08:47:23.568196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.618 [2024-11-19 08:47:23.652001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.618 [2024-11-19 08:47:23.652060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:44.618 [2024-11-19 08:47:23.652078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.618 [2024-11-19 08:47:23.652091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.619 [2024-11-19 08:47:23.652194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.619 [2024-11-19 08:47:23.652212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:44.619 [2024-11-19 08:47:23.652232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.619 [2024-11-19 08:47:23.652243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
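(Editor's note: the statistics block dumped just before the rollbacks makes the write-amplification figure easy to check by hand. Under the usual definition — media writes divided by host writes — the two counters reported by ftl_debug.c reproduce the logged value exactly:)

$$ \mathrm{WAF} \;=\; \frac{\text{total writes}}{\text{user writes}} \;=\; \frac{170432}{168448} \;\approx\; 1.0118 $$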
00:28:44.619 [2024-11-19 08:47:23.652293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.619 [2024-11-19 08:47:23.652314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:44.619 [2024-11-19 08:47:23.652327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.619 [2024-11-19 08:47:23.652338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.619 [2024-11-19 08:47:23.652469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.619 [2024-11-19 08:47:23.652495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:44.619 [2024-11-19 08:47:23.652508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.619 [2024-11-19 08:47:23.652527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.619 [2024-11-19 08:47:23.652575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.619 [2024-11-19 08:47:23.652593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:44.619 [2024-11-19 08:47:23.652628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.619 [2024-11-19 08:47:23.652644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.619 [2024-11-19 08:47:23.652690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.619 [2024-11-19 08:47:23.652705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:44.619 [2024-11-19 08:47:23.652717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.619 [2024-11-19 08:47:23.652739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.619 [2024-11-19 08:47:23.652791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.619 [2024-11-19 08:47:23.652812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:44.619 [2024-11-19 08:47:23.652825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.619 [2024-11-19 08:47:23.652837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.619 [2024-11-19 08:47:23.652978] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 442.431 ms, result 0 00:28:45.554 00:28:45.554 00:28:45.554 08:47:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:47.459 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:47.459 08:47:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:47.719 [2024-11-19 08:47:26.782981] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
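(Editor's note: this is the heart of the dirty-shutdown test — data written before the unclean stop is read back after restart and compared against checksums recorded at write time (the `md5sum -c ... OK` above), and spdk_dd then dumps the next range with `--skip=262144 --count=262144`. Assuming the FTL's 4 KiB block size, 262144 blocks is exactly 1 GiB, matching the 1024 MB copy totals in the progress output; the block size is an inference here, not stated in the log. Below is a self-contained C sketch of the same verify idea, with FNV-1a as a stand-in for MD5; the file names and hash choice are illustrative, not the test's actual tooling.)

```c
/* Sketch of the dirty-shutdown verification idea: hash the data as
 * written, hash the same range as re-read after restart, and compare.
 * FNV-1a stands in for md5sum; the paths are hypothetical. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BLK_SZ 4096u  /* assumed FTL block size: 262144 blocks = 1 GiB */

static uint64_t fnv1a(const uint8_t *p, size_t n, uint64_t h)
{
	while (n--) {
		h ^= *p++;
		h *= 1099511628211ULL;          /* FNV-1a 64-bit prime */
	}
	return h;
}

static uint64_t hash_file(const char *path)
{
	uint8_t buf[BLK_SZ];
	uint64_t h = 14695981039346656037ULL; /* FNV-1a 64-bit offset basis */
	size_t n;
	FILE *f = fopen(path, "rb");

	if (!f) { perror(path); exit(1); }
	while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
		h = fnv1a(buf, n, h);
	fclose(f);
	return h;
}

int main(void)
{
	/* written.bin: the data before the dirty shutdown; reread.bin: the
	 * same range dumped again after the FTL device came back up. */
	uint64_t before = hash_file("written.bin");
	uint64_t after  = hash_file("reread.bin");

	printf("%s\n", before == after ? "OK" : "MISMATCH");
	return before == after ? 0 : 1;
}
```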
00:28:47.719 [2024-11-19 08:47:26.783157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80646 ] 00:28:47.719 [2024-11-19 08:47:26.965671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.978 [2024-11-19 08:47:27.096814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.241 [2024-11-19 08:47:27.436620] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:48.241 [2024-11-19 08:47:27.436695] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:48.502 [2024-11-19 08:47:27.600181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.502 [2024-11-19 08:47:27.600245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:48.502 [2024-11-19 08:47:27.600275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:48.502 [2024-11-19 08:47:27.600288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.502 [2024-11-19 08:47:27.600356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.502 [2024-11-19 08:47:27.600374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:48.502 [2024-11-19 08:47:27.600392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:28:48.502 [2024-11-19 08:47:27.600403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.502 [2024-11-19 08:47:27.600436] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:48.502 [2024-11-19 08:47:27.601381] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:48.502 [2024-11-19 08:47:27.601567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.502 [2024-11-19 08:47:27.601589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:48.502 [2024-11-19 08:47:27.601633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:28:48.502 [2024-11-19 08:47:27.601651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.502 [2024-11-19 08:47:27.602807] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:48.502 [2024-11-19 08:47:27.619844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.502 [2024-11-19 08:47:27.620020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:48.502 [2024-11-19 08:47:27.620050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.038 ms 00:28:48.502 [2024-11-19 08:47:27.620071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.502 [2024-11-19 08:47:27.620151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.502 [2024-11-19 08:47:27.620172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:48.502 [2024-11-19 08:47:27.620195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:48.502 [2024-11-19 08:47:27.620210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.502 [2024-11-19 08:47:27.624778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:48.502 [2024-11-19 08:47:27.624824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:48.502 [2024-11-19 08:47:27.624842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.465 ms 00:28:48.502 [2024-11-19 08:47:27.624861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.502 [2024-11-19 08:47:27.624958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.502 [2024-11-19 08:47:27.624976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:48.502 [2024-11-19 08:47:27.624989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:28:48.502 [2024-11-19 08:47:27.625001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.502 [2024-11-19 08:47:27.625087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.502 [2024-11-19 08:47:27.625104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:48.502 [2024-11-19 08:47:27.625118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:48.502 [2024-11-19 08:47:27.625128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.502 [2024-11-19 08:47:27.625167] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:48.502 [2024-11-19 08:47:27.629448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.502 [2024-11-19 08:47:27.629489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:48.502 [2024-11-19 08:47:27.629510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.289 ms 00:28:48.502 [2024-11-19 08:47:27.629522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.502 [2024-11-19 08:47:27.629568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.502 [2024-11-19 08:47:27.629585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:48.502 [2024-11-19 08:47:27.629598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:48.502 [2024-11-19 08:47:27.629631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.502 [2024-11-19 08:47:27.629681] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:48.502 [2024-11-19 08:47:27.629713] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:48.502 [2024-11-19 08:47:27.629756] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:48.502 [2024-11-19 08:47:27.629780] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:48.502 [2024-11-19 08:47:27.629900] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:48.502 [2024-11-19 08:47:27.629919] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:48.502 [2024-11-19 08:47:27.629934] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:48.502 [2024-11-19 08:47:27.629950] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:48.502 [2024-11-19 08:47:27.629979] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:48.502 [2024-11-19 08:47:27.630007] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:48.502 [2024-11-19 08:47:27.630019] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:48.503 [2024-11-19 08:47:27.630030] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:48.503 [2024-11-19 08:47:27.630047] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:48.503 [2024-11-19 08:47:27.630060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.503 [2024-11-19 08:47:27.630072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:48.503 [2024-11-19 08:47:27.630085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:28:48.503 [2024-11-19 08:47:27.630096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.503 [2024-11-19 08:47:27.630199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.503 [2024-11-19 08:47:27.630214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:48.503 [2024-11-19 08:47:27.630227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:48.503 [2024-11-19 08:47:27.630239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.503 [2024-11-19 08:47:27.630386] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:48.503 [2024-11-19 08:47:27.630409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:48.503 [2024-11-19 08:47:27.630422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:48.503 [2024-11-19 08:47:27.630435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.503 [2024-11-19 08:47:27.630447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:48.503 [2024-11-19 08:47:27.630458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:48.503 [2024-11-19 08:47:27.630469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:48.503 [2024-11-19 08:47:27.630482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:48.503 [2024-11-19 08:47:27.630494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:48.503 [2024-11-19 08:47:27.630505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:48.503 [2024-11-19 08:47:27.630515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:48.503 [2024-11-19 08:47:27.630526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:48.503 [2024-11-19 08:47:27.630537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:48.503 [2024-11-19 08:47:27.630548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:48.503 [2024-11-19 08:47:27.630561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:48.503 [2024-11-19 08:47:27.630584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.503 [2024-11-19 08:47:27.630595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:48.503 [2024-11-19 08:47:27.630607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:48.503 [2024-11-19 08:47:27.630617] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.503 [2024-11-19 08:47:27.630629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:48.503 [2024-11-19 08:47:27.630659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:48.503 [2024-11-19 08:47:27.630672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.503 [2024-11-19 08:47:27.630683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:48.503 [2024-11-19 08:47:27.630694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:48.503 [2024-11-19 08:47:27.630705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.503 [2024-11-19 08:47:27.630716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:48.503 [2024-11-19 08:47:27.630733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:48.503 [2024-11-19 08:47:27.630744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.503 [2024-11-19 08:47:27.630754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:48.503 [2024-11-19 08:47:27.630766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:48.503 [2024-11-19 08:47:27.630777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:48.503 [2024-11-19 08:47:27.630787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:48.503 [2024-11-19 08:47:27.630798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:48.503 [2024-11-19 08:47:27.630809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:48.503 [2024-11-19 08:47:27.630820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:48.503 [2024-11-19 08:47:27.630831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:48.503 [2024-11-19 08:47:27.630842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:48.503 [2024-11-19 08:47:27.630861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:48.503 [2024-11-19 08:47:27.630874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:48.503 [2024-11-19 08:47:27.630885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.503 [2024-11-19 08:47:27.630896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:48.503 [2024-11-19 08:47:27.630906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:48.503 [2024-11-19 08:47:27.630917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.503 [2024-11-19 08:47:27.630928] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:48.503 [2024-11-19 08:47:27.630940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:48.503 [2024-11-19 08:47:27.630951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:48.503 [2024-11-19 08:47:27.630964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:48.503 [2024-11-19 08:47:27.630976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:48.503 [2024-11-19 08:47:27.630988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:48.503 [2024-11-19 08:47:27.630998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:48.503 
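(Editor's note: the layout dump above is a flat table of (offset, blocks) pairs in MiB, and its internal consistency can be checked mechanically: sorted by offset, each region must end at or before the next one begins — e.g. l2p at 0.12 MiB + 80.00 MiB = 80.12 MiB, exactly where band_md starts. The 80 MiB itself is what 20971520 L2P entries at 4 bytes each occupy. Here is a small sketch of such an overlap check, with a few regions hard-coded from the log; the check function is an illustration, not SPDK's ftl_layout verification.)

```c
/* Sanity-check a flat region table like the NV cache layout dump above:
 * after sorting by offset, region i must end before region i+1 starts.
 * The table copies a few regions from the log; the check itself is an
 * illustration, not SPDK code. Gaps between regions are allowed. */
#include <stdio.h>
#include <stdlib.h>

struct region { const char *name; double off_mib, len_mib; };

static int by_offset(const void *a, const void *b)
{
	const struct region *x = a, *y = b;
	return (x->off_mib > y->off_mib) - (x->off_mib < y->off_mib);
}

int main(void)
{
	struct region r[] = {
		{ "sb",      0.00,  0.12 },
		{ "l2p",     0.12, 80.00 },
		{ "band_md", 80.12,  0.50 },
		{ "p2l0",    81.12,  8.00 },
	};
	size_t i, n = sizeof(r) / sizeof(r[0]);

	qsort(r, n, sizeof(r[0]), by_offset);
	for (i = 0; i + 1 < n; i++) {
		double end = r[i].off_mib + r[i].len_mib;
		if (end > r[i + 1].off_mib) {
			fprintf(stderr, "overlap: %s ends at %.2f MiB, "
				"%s starts at %.2f MiB\n", r[i].name, end,
				r[i + 1].name, r[i + 1].off_mib);
			return 1;
		}
	}
	puts("layout OK");
	return 0;
}
```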
[2024-11-19 08:47:27.631010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:48.503 [2024-11-19 08:47:27.631020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:48.503 [2024-11-19 08:47:27.631032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:48.503 [2024-11-19 08:47:27.631044] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:48.503 [2024-11-19 08:47:27.631059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:48.503 [2024-11-19 08:47:27.631078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:48.503 [2024-11-19 08:47:27.631090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:48.503 [2024-11-19 08:47:27.631102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:48.503 [2024-11-19 08:47:27.631114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:48.503 [2024-11-19 08:47:27.631125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:48.503 [2024-11-19 08:47:27.631137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:48.503 [2024-11-19 08:47:27.631149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:48.503 [2024-11-19 08:47:27.631161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:48.503 [2024-11-19 08:47:27.631172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:48.503 [2024-11-19 08:47:27.631184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:48.503 [2024-11-19 08:47:27.631196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:48.503 [2024-11-19 08:47:27.631208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:48.503 [2024-11-19 08:47:27.631219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:48.503 [2024-11-19 08:47:27.631231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:48.503 [2024-11-19 08:47:27.631243] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:48.503 [2024-11-19 08:47:27.631256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:48.503 [2024-11-19 08:47:27.631270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:48.503 [2024-11-19 08:47:27.631282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:48.503 [2024-11-19 08:47:27.631294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:48.503 [2024-11-19 08:47:27.631306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:48.503 [2024-11-19 08:47:27.631319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.503 [2024-11-19 08:47:27.631331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:48.503 [2024-11-19 08:47:27.631343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:28:48.503 [2024-11-19 08:47:27.631360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.503 [2024-11-19 08:47:27.667489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.503 [2024-11-19 08:47:27.667551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:48.503 [2024-11-19 08:47:27.667573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.065 ms 00:28:48.503 [2024-11-19 08:47:27.667591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.503 [2024-11-19 08:47:27.667762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.503 [2024-11-19 08:47:27.667783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:48.503 [2024-11-19 08:47:27.667798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:28:48.503 [2024-11-19 08:47:27.667810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.503 [2024-11-19 08:47:27.722454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.503 [2024-11-19 08:47:27.722513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:48.504 [2024-11-19 08:47:27.722534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.542 ms 00:28:48.504 [2024-11-19 08:47:27.722547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.504 [2024-11-19 08:47:27.722639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.504 [2024-11-19 08:47:27.722660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:48.504 [2024-11-19 08:47:27.722681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:48.504 [2024-11-19 08:47:27.722693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.504 [2024-11-19 08:47:27.723096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.504 [2024-11-19 08:47:27.723124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:48.504 [2024-11-19 08:47:27.723139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:28:48.504 [2024-11-19 08:47:27.723151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.504 [2024-11-19 08:47:27.723321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.504 [2024-11-19 08:47:27.723342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:48.504 [2024-11-19 08:47:27.723363] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:28:48.504 [2024-11-19 08:47:27.723375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.504 [2024-11-19 08:47:27.741726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.504 [2024-11-19 08:47:27.741940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:48.504 [2024-11-19 08:47:27.741970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.322 ms 00:28:48.504 [2024-11-19 08:47:27.741984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.504 [2024-11-19 08:47:27.758742] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:48.504 [2024-11-19 08:47:27.758803] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:48.504 [2024-11-19 08:47:27.758825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.504 [2024-11-19 08:47:27.758838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:48.504 [2024-11-19 08:47:27.758851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.699 ms 00:28:48.504 [2024-11-19 08:47:27.758864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.504 [2024-11-19 08:47:27.790638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.504 [2024-11-19 08:47:27.790713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:48.504 [2024-11-19 08:47:27.790734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.726 ms 00:28:48.504 [2024-11-19 08:47:27.790747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.762 [2024-11-19 08:47:27.808001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.762 [2024-11-19 08:47:27.808053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:48.762 [2024-11-19 08:47:27.808072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.199 ms 00:28:48.762 [2024-11-19 08:47:27.808084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.762 [2024-11-19 08:47:27.825254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.762 [2024-11-19 08:47:27.825298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:48.762 [2024-11-19 08:47:27.825337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.124 ms 00:28:48.762 [2024-11-19 08:47:27.825355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.762 [2024-11-19 08:47:27.826194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.762 [2024-11-19 08:47:27.826233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:48.762 [2024-11-19 08:47:27.826255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:28:48.762 [2024-11-19 08:47:27.826266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.762 [2024-11-19 08:47:27.903511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.762 [2024-11-19 08:47:27.903588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:48.762 [2024-11-19 08:47:27.903634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 77.217 ms 00:28:48.762 [2024-11-19 08:47:27.903649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.762 [2024-11-19 08:47:27.917226] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:48.762 [2024-11-19 08:47:27.920059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.762 [2024-11-19 08:47:27.920099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:48.762 [2024-11-19 08:47:27.920118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.326 ms 00:28:48.762 [2024-11-19 08:47:27.920131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.762 [2024-11-19 08:47:27.920254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.762 [2024-11-19 08:47:27.920276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:48.762 [2024-11-19 08:47:27.920290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:48.762 [2024-11-19 08:47:27.920307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.762 [2024-11-19 08:47:27.921110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.762 [2024-11-19 08:47:27.921241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:48.762 [2024-11-19 08:47:27.921363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:28:48.762 [2024-11-19 08:47:27.921417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.762 [2024-11-19 08:47:27.921553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.762 [2024-11-19 08:47:27.921627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:48.762 [2024-11-19 08:47:27.921680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:48.762 [2024-11-19 08:47:27.921697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.762 [2024-11-19 08:47:27.921753] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:48.762 [2024-11-19 08:47:27.921773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.762 [2024-11-19 08:47:27.921785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:48.762 [2024-11-19 08:47:27.921798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:48.762 [2024-11-19 08:47:27.921811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.762 [2024-11-19 08:47:27.953912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.762 [2024-11-19 08:47:27.953967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:48.762 [2024-11-19 08:47:27.953994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.070 ms 00:28:48.762 [2024-11-19 08:47:27.954007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:48.762 [2024-11-19 08:47:27.954111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:48.762 [2024-11-19 08:47:27.954146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:48.762 [2024-11-19 08:47:27.954159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:48.762 [2024-11-19 08:47:27.954171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
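Each FTL management step in the startup trace above is emitted as an Action/name/duration/status quadruple, and the finish_msg line that follows reports the aggregate for the whole 'FTL startup' process (354.620 ms). As a rough cross-check, the per-step durations can be summed straight out of a saved console log; this is a minimal sketch, assuming the output above has been captured to a file named ftl.log (the file name is an assumption, not part of the test):

    # Sum every per-step "duration: X ms" printed by trace_step and compare
    # against the total reported by finish_msg. Note this counts all traced
    # steps in the capture, so restrict the input to a single management
    # process if several (startup, shutdown, rollback) are present.
    grep -Eo 'duration: [0-9.]+ ms' ftl.log |
        awk '{ sum += $2 } END { printf "sum of step durations: %.3f ms\n", sum }'
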
00:28:48.762 [2024-11-19 08:47:27.955332] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 354.620 ms, result 0
00:28:50.140  [2024-11-19T08:47:30.372Z] Copying: 25/1024 [MB] (25 MBps)
[... intermediate Copying progress updates (51/1024 through 1014/1024 MB, at 22-26 MBps each) elided; they were carriage-return screen updates flattened into a single line by the capture ...]
[2024-11-19T08:48:11.868Z] Copying: 1024/1024 [MB] (average 23 MBps)
[2024-11-19 08:48:11.612835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.572 [2024-11-19 08:48:11.612927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:32.572 [2024-11-19 08:48:11.612964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:32.572 [2024-11-19 08:48:11.612977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.572 [2024-11-19 08:48:11.613022] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:32.572 [2024-11-19 08:48:11.617390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.572
[2024-11-19 08:48:11.617428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:32.572 [2024-11-19 08:48:11.617466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.346 ms 00:29:32.572 [2024-11-19 08:48:11.617478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.572 [2024-11-19 08:48:11.617732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.572 [2024-11-19 08:48:11.617751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:32.572 [2024-11-19 08:48:11.617763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:29:32.572 [2024-11-19 08:48:11.617773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.572 [2024-11-19 08:48:11.621078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.572 [2024-11-19 08:48:11.621108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:32.572 [2024-11-19 08:48:11.621138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.287 ms 00:29:32.572 [2024-11-19 08:48:11.621171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.572 [2024-11-19 08:48:11.627216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.572 [2024-11-19 08:48:11.627411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:32.572 [2024-11-19 08:48:11.627455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.024 ms 00:29:32.572 [2024-11-19 08:48:11.627467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.572 [2024-11-19 08:48:11.655881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.572 [2024-11-19 08:48:11.656129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:32.572 [2024-11-19 08:48:11.656168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.339 ms 00:29:32.572 [2024-11-19 08:48:11.656181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.572 [2024-11-19 08:48:11.672530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.572 [2024-11-19 08:48:11.672571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:32.572 [2024-11-19 08:48:11.672604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.304 ms 00:29:32.572 [2024-11-19 08:48:11.672615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.572 [2024-11-19 08:48:11.674587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.572 [2024-11-19 08:48:11.674646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:32.572 [2024-11-19 08:48:11.674679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.892 ms 00:29:32.572 [2024-11-19 08:48:11.674690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.572 [2024-11-19 08:48:11.703259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.572 [2024-11-19 08:48:11.703307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:32.572 [2024-11-19 08:48:11.703339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.546 ms 00:29:32.572 [2024-11-19 08:48:11.703349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.572 [2024-11-19 08:48:11.733557] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.572 [2024-11-19 08:48:11.733678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:32.572 [2024-11-19 08:48:11.733713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.167 ms 00:29:32.572 [2024-11-19 08:48:11.733723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.572 [2024-11-19 08:48:11.762383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.572 [2024-11-19 08:48:11.762423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:32.572 [2024-11-19 08:48:11.762455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.618 ms 00:29:32.572 [2024-11-19 08:48:11.762465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.572 [2024-11-19 08:48:11.790314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.572 [2024-11-19 08:48:11.790353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:32.572 [2024-11-19 08:48:11.790386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.755 ms 00:29:32.572 [2024-11-19 08:48:11.790396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.572 [2024-11-19 08:48:11.790435] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:32.572 [2024-11-19 08:48:11.790464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:32.572 [2024-11-19 08:48:11.790481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:32.572 [2024-11-19 08:48:11.790492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:32.572 [2024-11-19 08:48:11.790503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:32.572 [2024-11-19 08:48:11.790513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:32.572 [2024-11-19 08:48:11.790524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:32.572 [2024-11-19 08:48:11.790549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:32.572 [2024-11-19 08:48:11.790560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: 
free 00:29:32.573 [2024-11-19 08:48:11.790679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 
261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.790995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791509] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:32.573 [2024-11-19 08:48:11.791582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:32.574 [2024-11-19 08:48:11.791592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:32.574 [2024-11-19 08:48:11.791602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:32.574 [2024-11-19 08:48:11.791613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:32.574 [2024-11-19 08:48:11.791630] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:32.574 [2024-11-19 08:48:11.791642] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 68a09d1c-a153-421e-98ab-c545dfae1eab 00:29:32.574 [2024-11-19 08:48:11.791673] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:32.574 [2024-11-19 08:48:11.791687] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:32.574 [2024-11-19 08:48:11.791696] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:32.574 [2024-11-19 08:48:11.791706] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:32.574 [2024-11-19 08:48:11.791716] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:32.574 [2024-11-19 08:48:11.791733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:32.574 [2024-11-19 08:48:11.791784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:32.574 [2024-11-19 08:48:11.791803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:32.574 [2024-11-19 08:48:11.791819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:32.574 [2024-11-19 08:48:11.791832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.574 [2024-11-19 08:48:11.791844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:32.574 [2024-11-19 08:48:11.791857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.397 ms 00:29:32.574 [2024-11-19 08:48:11.791875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.574 [2024-11-19 08:48:11.807227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.574 [2024-11-19 08:48:11.807263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:32.574 [2024-11-19 08:48:11.807296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.309 ms 00:29:32.574 [2024-11-19 08:48:11.807307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.574 [2024-11-19 08:48:11.807823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:32.574 [2024-11-19 08:48:11.807853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:32.574 [2024-11-19 08:48:11.807866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:29:32.574 [2024-11-19 08:48:11.807878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.574 [2024-11-19 08:48:11.845963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.574 [2024-11-19 08:48:11.846004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:32.574 [2024-11-19 08:48:11.846038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.574 [2024-11-19 08:48:11.846048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.574 [2024-11-19 08:48:11.846103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.574 [2024-11-19 08:48:11.846124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:32.574 [2024-11-19 08:48:11.846136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.574 [2024-11-19 08:48:11.846161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.574 [2024-11-19 08:48:11.846259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.574 [2024-11-19 08:48:11.846277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:32.574 [2024-11-19 08:48:11.846289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.574 [2024-11-19 08:48:11.846299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.574 [2024-11-19 08:48:11.846336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.574 [2024-11-19 08:48:11.846349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:32.574 [2024-11-19 08:48:11.846367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.574 [2024-11-19 08:48:11.846377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.833 [2024-11-19 08:48:11.937516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.833 [2024-11-19 08:48:11.937597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:32.833 [2024-11-19 08:48:11.937677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.833 [2024-11-19 08:48:11.937690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.833 [2024-11-19 08:48:12.018983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.833 [2024-11-19 08:48:12.019046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:32.833 [2024-11-19 08:48:12.019071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.833 [2024-11-19 08:48:12.019082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.833 [2024-11-19 08:48:12.019174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.833 [2024-11-19 08:48:12.019191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:32.833 
[2024-11-19 08:48:12.019202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.833 [2024-11-19 08:48:12.019212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.833 [2024-11-19 08:48:12.019289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.833 [2024-11-19 08:48:12.019305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:32.833 [2024-11-19 08:48:12.019316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.833 [2024-11-19 08:48:12.019332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.833 [2024-11-19 08:48:12.019442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.833 [2024-11-19 08:48:12.019462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:32.834 [2024-11-19 08:48:12.019474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.834 [2024-11-19 08:48:12.019499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.834 [2024-11-19 08:48:12.019552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.834 [2024-11-19 08:48:12.019569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:32.834 [2024-11-19 08:48:12.019580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.834 [2024-11-19 08:48:12.019590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.834 [2024-11-19 08:48:12.019636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.834 [2024-11-19 08:48:12.019690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:32.834 [2024-11-19 08:48:12.019723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.834 [2024-11-19 08:48:12.019766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.834 [2024-11-19 08:48:12.019857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:32.834 [2024-11-19 08:48:12.019876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:32.834 [2024-11-19 08:48:12.019888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:32.834 [2024-11-19 08:48:12.019907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:32.834 [2024-11-19 08:48:12.020055] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 407.177 ms, result 0 00:29:33.771 00:29:33.771 00:29:33.771 08:48:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:35.675 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:35.675 08:48:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:35.675 08:48:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:35.676 08:48:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:35.676 08:48:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:35.935 08:48:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:35.935 08:48:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:35.935 08:48:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:35.935 Process with pid 78703 is not found 00:29:35.935 08:48:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78703 00:29:35.935 08:48:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 78703 ']' 00:29:35.935 08:48:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 78703 00:29:35.935 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78703) - No such process 00:29:35.935 08:48:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 78703 is not found' 00:29:35.935 08:48:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:36.195 Remove shared memory files 00:29:36.195 08:48:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:36.195 08:48:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:36.195 08:48:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:36.195 08:48:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:36.195 08:48:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:36.195 08:48:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:36.195 08:48:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:36.195 ************************************ 00:29:36.195 END TEST ftl_dirty_shutdown 00:29:36.195 ************************************ 00:29:36.195 00:29:36.195 real 3m58.830s 00:29:36.195 user 4m37.014s 00:29:36.195 sys 0m36.311s 00:29:36.195 08:48:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:36.195 08:48:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:36.195 08:48:15 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:36.195 08:48:15 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:36.195 08:48:15 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:36.195 08:48:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:36.195 ************************************ 00:29:36.195 START TEST ftl_upgrade_shutdown 00:29:36.195 ************************************ 00:29:36.195 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:36.455 * Looking for test storage... 
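Before the upgrade_shutdown test gets underway, note the killprocess pattern in the dirty_shutdown teardown traced above: '[' -z 78703 ']' guards against an empty pid, and kill -0 78703 probes whether the process still exists without delivering a signal; here it has already exited, hence the "No such process" error and the 'Process with pid 78703 is not found' message. A minimal standalone sketch of that probe-then-kill pattern (reduced to its essentials; the real helper in autotest_common.sh does more than this):

    # Probe-then-kill: kill -0 sends no signal, it only reports whether the
    # pid exists and is signalable, so it doubles as a liveness check.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # no pid given: nothing to do
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"                        # still alive: terminate it
        else
            echo "Process with pid $pid is not found"
        fi
    }
    killprocess 78703
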
00:29:36.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:36.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.455 --rc genhtml_branch_coverage=1 00:29:36.455 --rc genhtml_function_coverage=1 00:29:36.455 --rc genhtml_legend=1 00:29:36.455 --rc geninfo_all_blocks=1 00:29:36.455 --rc geninfo_unexecuted_blocks=1 00:29:36.455 00:29:36.455 ' 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:36.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.455 --rc genhtml_branch_coverage=1 00:29:36.455 --rc genhtml_function_coverage=1 00:29:36.455 --rc genhtml_legend=1 00:29:36.455 --rc geninfo_all_blocks=1 00:29:36.455 --rc geninfo_unexecuted_blocks=1 00:29:36.455 00:29:36.455 ' 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:36.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.455 --rc genhtml_branch_coverage=1 00:29:36.455 --rc genhtml_function_coverage=1 00:29:36.455 --rc genhtml_legend=1 00:29:36.455 --rc geninfo_all_blocks=1 00:29:36.455 --rc geninfo_unexecuted_blocks=1 00:29:36.455 00:29:36.455 ' 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:36.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.455 --rc genhtml_branch_coverage=1 00:29:36.455 --rc genhtml_function_coverage=1 00:29:36.455 --rc genhtml_legend=1 00:29:36.455 --rc geninfo_all_blocks=1 00:29:36.455 --rc geninfo_unexecuted_blocks=1 00:29:36.455 00:29:36.455 ' 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:36.455 08:48:15 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:36.455 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:36.456 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:36.456 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:36.456 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:36.456 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:36.456 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:36.456 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81192 00:29:36.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.456 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:36.456 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81192 00:29:36.456 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81192 ']' 00:29:36.456 08:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:36.456 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.456 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:36.456 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.456 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:36.456 08:48:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:36.714 [2024-11-19 08:48:15.829825] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
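While the freshly launched target prints its startup banner (its DPDK EAL parameter dump follows below), waitforlisten blocks the script until spdk_tgt pid 81192 is accepting RPCs on /var/tmp/spdk.sock. A minimal sketch of that polling idea, using only the rpc_addr and max_retries=100 values visible in the trace; the real waitforlisten is more thorough, and checking for the socket node alone is a simplification:

    # Poll for the target's RPC socket, bounded by max_retries attempts.
    # Seeing the socket node appear does not prove the RPC server answers;
    # the real helper follows up over the socket before returning.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        [[ -S $rpc_addr ]] && break            # socket exists: target is up
        sleep 0.1
    done
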
00:29:36.714 [2024-11-19 08:48:15.830226] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81192 ] 00:29:36.974 [2024-11-19 08:48:16.019235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.974 [2024-11-19 08:48:16.144848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:37.911 08:48:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:38.171 08:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:38.171 08:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:38.171 08:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:38.171 08:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:38.171 08:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:38.171 08:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:38.171 08:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb
00:29:38.171 08:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1
00:29:38.430 08:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:29:38.430 {
00:29:38.430 "name": "basen1",
00:29:38.430 "aliases": [
00:29:38.430 "9d31bf73-2299-4016-af91-ecc221be22a3"
00:29:38.430 ],
00:29:38.430 "product_name": "NVMe disk",
00:29:38.430 "block_size": 4096,
00:29:38.430 "num_blocks": 1310720,
00:29:38.430 "uuid": "9d31bf73-2299-4016-af91-ecc221be22a3",
00:29:38.430 "numa_id": -1,
00:29:38.430 "assigned_rate_limits": {
00:29:38.430 "rw_ios_per_sec": 0,
00:29:38.430 "rw_mbytes_per_sec": 0,
00:29:38.430 "r_mbytes_per_sec": 0,
00:29:38.430 "w_mbytes_per_sec": 0
00:29:38.430 },
00:29:38.430 "claimed": true,
00:29:38.430 "claim_type": "read_many_write_one",
00:29:38.430 "zoned": false,
00:29:38.430 "supported_io_types": {
00:29:38.430 "read": true,
00:29:38.430 "write": true,
00:29:38.430 "unmap": true,
00:29:38.430 "flush": true,
00:29:38.430 "reset": true,
00:29:38.430 "nvme_admin": true,
00:29:38.430 "nvme_io": true,
00:29:38.430 "nvme_io_md": false,
00:29:38.430 "write_zeroes": true,
00:29:38.430 "zcopy": false,
00:29:38.430 "get_zone_info": false,
00:29:38.430 "zone_management": false,
00:29:38.430 "zone_append": false,
00:29:38.430 "compare": true,
00:29:38.430 "compare_and_write": false,
00:29:38.430 "abort": true,
00:29:38.430 "seek_hole": false,
00:29:38.430 "seek_data": false,
00:29:38.430 "copy": true,
00:29:38.430 "nvme_iov_md": false
00:29:38.430 },
00:29:38.430 "driver_specific": {
00:29:38.431 "nvme": [
00:29:38.431 {
00:29:38.431 "pci_address": "0000:00:11.0",
00:29:38.431 "trid": {
00:29:38.431 "trtype": "PCIe",
00:29:38.431 "traddr": "0000:00:11.0"
00:29:38.431 },
00:29:38.431 "ctrlr_data": {
00:29:38.431 "cntlid": 0,
00:29:38.431 "vendor_id": "0x1b36",
00:29:38.431 "model_number": "QEMU NVMe Ctrl",
00:29:38.431 "serial_number": "12341",
00:29:38.431 "firmware_revision": "8.0.0",
00:29:38.431 "subnqn": "nqn.2019-08.org.qemu:12341",
00:29:38.431 "oacs": {
00:29:38.431 "security": 0,
00:29:38.431 "format": 1,
00:29:38.431 "firmware": 0,
00:29:38.431 "ns_manage": 1
00:29:38.431 },
00:29:38.431 "multi_ctrlr": false,
00:29:38.431 "ana_reporting": false
00:29:38.431 },
00:29:38.431 "vs": {
00:29:38.431 "nvme_version": "1.4"
00:29:38.431 },
00:29:38.431 "ns_data": {
00:29:38.431 "id": 1,
00:29:38.431 "can_share": false
00:29:38.431 }
00:29:38.431 }
00:29:38.431 ],
00:29:38.431 "mp_policy": "active_passive"
00:29:38.431 }
00:29:38.431 }
00:29:38.431 ]'
00:29:38.431 08:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:29:38.431 08:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:29:38.431 08:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:29:38.431 08:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720
00:29:38.431 08:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:29:38.431 08:48:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120
00:29:38.431 08:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120
00:29:38.431 08:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]]
00:29:38.431 08:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols
00:29:38.431 08:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:29:38.431 08:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:29:38.690 08:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=32791398-951d-4c55-9b9f-52c8bae0fe21
00:29:38.690 08:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores
00:29:38.690 08:48:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 32791398-951d-4c55-9b9f-52c8bae0fe21
00:29:38.950 08:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs
00:29:39.242 08:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=68c34f75-ba66-44c8-bbd3-00b23bd8b04b
00:29:39.242 08:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 68c34f75-ba66-44c8-bbd3-00b23bd8b04b
00:29:39.536 08:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=d433d8f6-150e-4aa8-8687-99c739675ef3
00:29:39.536 08:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z d433d8f6-150e-4aa8-8687-99c739675ef3 ]]
00:29:39.536 08:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 d433d8f6-150e-4aa8-8687-99c739675ef3 5120
00:29:39.536 08:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache
00:29:39.536 08:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:29:39.536 08:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=d433d8f6-150e-4aa8-8687-99c739675ef3
00:29:39.536 08:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120
00:29:39.536 08:48:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size d433d8f6-150e-4aa8-8687-99c739675ef3
00:29:39.536 08:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d433d8f6-150e-4aa8-8687-99c739675ef3
00:29:39.536 08:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:29:39.536 08:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:29:39.536 08:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:29:39.536 08:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d433d8f6-150e-4aa8-8687-99c739675ef3
00:29:39.820 08:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:29:39.820 {
00:29:39.820 "name": "d433d8f6-150e-4aa8-8687-99c739675ef3",
00:29:39.820 "aliases": [
00:29:39.820 "lvs/basen1p0"
00:29:39.820 ],
00:29:39.820 "product_name": "Logical Volume",
00:29:39.820 "block_size": 4096,
00:29:39.820 "num_blocks": 5242880,
00:29:39.820 "uuid": "d433d8f6-150e-4aa8-8687-99c739675ef3",
00:29:39.820 "assigned_rate_limits": {
00:29:39.820 "rw_ios_per_sec": 0,
00:29:39.820 "rw_mbytes_per_sec": 0,
00:29:39.820 "r_mbytes_per_sec": 0,
00:29:39.820 "w_mbytes_per_sec": 0
00:29:39.820 },
00:29:39.820 "claimed": false,
00:29:39.820 "zoned": false,
00:29:39.820 "supported_io_types": {
00:29:39.820 "read": true,
00:29:39.820 "write": true,
00:29:39.820 "unmap": true,
00:29:39.820 "flush": false,
00:29:39.820 "reset": true,
00:29:39.820 "nvme_admin": false,
00:29:39.820 "nvme_io": false,
00:29:39.820 "nvme_io_md": false,
00:29:39.820 "write_zeroes": true,
00:29:39.820 "zcopy": false,
00:29:39.820 "get_zone_info": false,
00:29:39.820 "zone_management": false,
00:29:39.820 "zone_append": false,
00:29:39.820 "compare": false,
00:29:39.820 "compare_and_write": false,
00:29:39.820 "abort": false,
00:29:39.820 "seek_hole": true,
00:29:39.820 "seek_data": true,
00:29:39.820 "copy": false,
00:29:39.820 "nvme_iov_md": false
00:29:39.820 },
00:29:39.820 "driver_specific": {
00:29:39.820 "lvol": {
00:29:39.820 "lvol_store_uuid": "68c34f75-ba66-44c8-bbd3-00b23bd8b04b",
00:29:39.820 "base_bdev": "basen1",
00:29:39.820 "thin_provision": true,
00:29:39.820 "num_allocated_clusters": 0,
00:29:39.820 "snapshot": false,
00:29:39.820 "clone": false,
00:29:39.820 "esnap_clone": false
00:29:39.820 }
00:29:39.820 }
00:29:39.820 }
00:29:39.820 ]'
00:29:39.820 08:48:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:29:39.820 08:48:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:29:39.820 08:48:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:29:39.820 08:48:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880
00:29:39.820 08:48:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480
00:29:39.820 08:48:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480
00:29:39.820 08:48:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024
00:29:39.820 08:48:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev
00:29:39.820 08:48:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
00:29:40.077 08:48:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1
00:29:40.077 08:48:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]]
00:29:40.077 08:48:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1
00:29:40.336 08:48:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0
00:29:40.336 08:48:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]]
00:29:40.336 08:48:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d d433d8f6-150e-4aa8-8687-99c739675ef3 -c cachen1p0 --l2p_dram_limit 2
00:29:40.596 [2024-11-19 08:48:19.812908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:40.596 [2024-11-19 08:48:19.812965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration
00:29:40.596 [2024-11-19 08:48:19.813008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms
00:29:40.596 [2024-11-19 08:48:19.813020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:40.596 [2024-11-19 08:48:19.813092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:40.596 [2024-11-19 08:48:19.813109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:29:40.596 [2024-11-19 08:48:19.813123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms
00:29:40.596 [2024-11-19 08:48:19.813134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:40.596 [2024-11-19 08:48:19.813180] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache
00:29:40.596 [2024-11-19 08:48:19.814239] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device
00:29:40.596 [2024-11-19 08:48:19.814306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:40.596 [2024-11-19 08:48:19.814323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:29:40.596 [2024-11-19 08:48:19.814339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.112 ms
00:29:40.596 [2024-11-19 08:48:19.814351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:40.596 [2024-11-19 08:48:19.814530] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID dbcbb735-53c4-4435-8bd6-6f167c6b6c26
00:29:40.596 [2024-11-19 08:48:19.815667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:40.596 [2024-11-19 08:48:19.815711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock
00:29:40.596 [2024-11-19 08:48:19.815729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms
00:29:40.596 [2024-11-19 08:48:19.815772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:40.596 [2024-11-19 08:48:19.820724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:40.596 [2024-11-19 08:48:19.820789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:29:40.596 [2024-11-19 08:48:19.820807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.893 ms
00:29:40.596 [2024-11-19 08:48:19.820821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:40.596 [2024-11-19 08:48:19.820878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:40.596 [2024-11-19 08:48:19.820898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:29:40.596 [2024-11-19 08:48:19.820911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms
00:29:40.596 [2024-11-19 08:48:19.820925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:40.596 [2024-11-19 08:48:19.820993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:40.596 [2024-11-19 08:48:19.821015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device
00:29:40.596 [2024-11-19 08:48:19.821028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms
00:29:40.596 [2024-11-19 08:48:19.821046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:40.596 [2024-11-19 08:48:19.821077] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread
00:29:40.596 [2024-11-19 08:48:19.825468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:40.596 [2024-11-19 08:48:19.825508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:29:40.596 [2024-11-19 08:48:19.825546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.397 ms
00:29:40.596 [2024-11-19 08:48:19.825558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:40.596 [2024-11-19 08:48:19.825594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:40.596 [2024-11-19 08:48:19.825609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands
00:29:40.596 [2024-11-19 08:48:19.825641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms
00:29:40.596 [2024-11-19 08:48:19.825674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:40.596 [2024-11-19 08:48:19.825748] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1
00:29:40.596 [2024-11-19 08:48:19.825901] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes
00:29:40.596 [2024-11-19 08:48:19.825924] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes
00:29:40.596 [2024-11-19 08:48:19.825940] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes
00:29:40.597 [2024-11-19 08:48:19.825956] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB
00:29:40.597 [2024-11-19 08:48:19.825985] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB
00:29:40.597 [2024-11-19 08:48:19.826016] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873
00:29:40.597 [2024-11-19 08:48:19.826028] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4
00:29:40.597 [2024-11-19 08:48:19.826059] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048
00:29:40.597 [2024-11-19 08:48:19.826070] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5
00:29:40.597 [2024-11-19 08:48:19.826085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:40.597 [2024-11-19 08:48:19.826098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout
00:29:40.597 [2024-11-19 08:48:19.826112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.357 ms
00:29:40.597 [2024-11-19 08:48:19.826123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:40.597 [2024-11-19 08:48:19.826219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:40.597 [2024-11-19 08:48:19.826246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout
00:29:40.597 [2024-11-19 08:48:19.826264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms
00:29:40.597 [2024-11-19 08:48:19.826287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:40.597 [2024-11-19 08:48:19.826427] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout:
00:29:40.597 [2024-11-19 08:48:19.826446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb
00:29:40.597 [2024-11-19 08:48:19.826461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB
00:29:40.597 [2024-11-19 08:48:19.826474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:29:40.597 [2024-11-19 08:48:19.826489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p
00:29:40.597 [2024-11-19 08:48:19.826500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB
00:29:40.597 [2024-11-19 08:48:19.826514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB
00:29:40.597 [2024-11-19 08:48:19.826525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md
00:29:40.597 [2024-11-19 08:48:19.826538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB
00:29:40.597 [2024-11-19 08:48:19.826549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:29:40.597 [2024-11-19 08:48:19.826562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror
00:29:40.597 [2024-11-19 08:48:19.826573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB
00:29:40.597 [2024-11-19 08:48:19.826586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:29:40.597 [2024-11-19 08:48:19.826598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md
00:29:40.597 [2024-11-19 08:48:19.826611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB
00:29:40.597 [2024-11-19 08:48:19.826638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:29:40.597 [2024-11-19 08:48:19.826658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror
00:29:40.597 [2024-11-19 08:48:19.826671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB
00:29:40.597 [2024-11-19 08:48:19.826686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:29:40.597 [2024-11-19 08:48:19.826697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0
00:29:40.597 [2024-11-19 08:48:19.826711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB
00:29:40.597 [2024-11-19 08:48:19.826722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB
00:29:40.597 [2024-11-19 08:48:19.826735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1
00:29:40.597 [2024-11-19 08:48:19.826746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB
00:29:40.597 [2024-11-19 08:48:19.826759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB
00:29:40.597 [2024-11-19 08:48:19.826770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2
00:29:40.597 [2024-11-19 08:48:19.826783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB
00:29:40.597 [2024-11-19 08:48:19.826794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB
00:29:40.597 [2024-11-19 08:48:19.826806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3
00:29:40.597 [2024-11-19 08:48:19.826817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB
00:29:40.597 [2024-11-19 08:48:19.826830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB
00:29:40.597 [2024-11-19 08:48:19.826842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md
00:29:40.597 [2024-11-19 08:48:19.826857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB
00:29:40.597 [2024-11-19 08:48:19.826869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:29:40.597 [2024-11-19 08:48:19.826883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror
00:29:40.597 [2024-11-19 08:48:19.826909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB
00:29:40.597 [2024-11-19 08:48:19.826922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:29:40.597 [2024-11-19 08:48:19.826933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log
00:29:40.597 [2024-11-19 08:48:19.826946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB
00:29:40.597 [2024-11-19 08:48:19.826957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:29:40.597 [2024-11-19 08:48:19.826969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror
00:29:40.597 [2024-11-19 08:48:19.826980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB
00:29:40.597 [2024-11-19 08:48:19.826993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:29:40.597 [2024-11-19 08:48:19.827003] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout:
00:29:40.597 [2024-11-19 08:48:19.827017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror
00:29:40.597 [2024-11-19 08:48:19.827028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB
00:29:40.597 [2024-11-19 08:48:19.827044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB
00:29:40.597 [2024-11-19 08:48:19.827056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap
00:29:40.597 [2024-11-19 08:48:19.827071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB
00:29:40.597 [2024-11-19 08:48:19.827082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB
00:29:40.597 [2024-11-19 08:48:19.827095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm
00:29:40.597 [2024-11-19 08:48:19.827106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB
00:29:40.597 [2024-11-19 08:48:19.827119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB
00:29:40.597 [2024-11-19 08:48:19.827135] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc:
00:29:40.597 [2024-11-19 08:48:19.827168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:40.597 [2024-11-19 08:48:19.827183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80
00:29:40.597 [2024-11-19 08:48:19.827197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20
00:29:40.597 [2024-11-19 08:48:19.827209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20
00:29:40.597 [2024-11-19 08:48:19.827223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800
00:29:40.597 [2024-11-19 08:48:19.827236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800
00:29:40.597 [2024-11-19 08:48:19.827250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800
00:29:40.597 [2024-11-19 08:48:19.827262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800
00:29:40.597 [2024-11-19 08:48:19.827276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20
00:29:40.597 [2024-11-19 08:48:19.827288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20
00:29:40.597 [2024-11-19 08:48:19.827303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20
00:29:40.597 [2024-11-19 08:48:19.827315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20
00:29:40.597 [2024-11-19 08:48:19.827330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20
00:29:40.597 [2024-11-19 08:48:19.827342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20
00:29:40.597 [2024-11-19 08:48:19.827359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060
00:29:40.597 [2024-11-19 08:48:19.827371] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev:
00:29:40.598 [2024-11-19 08:48:19.827386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:40.598 [2024-11-19 08:48:19.827399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:29:40.598 [2024-11-19 08:48:19.827413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000
00:29:40.598 [2024-11-19 08:48:19.827425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0
00:29:40.598 [2024-11-19 08:48:19.827439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0
00:29:40.598 [2024-11-19 08:48:19.827453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:40.598 [2024-11-19 08:48:19.827467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade
00:29:40.598 [2024-11-19 08:48:19.827480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.115 ms
00:29:40.598 [2024-11-19 08:48:19.827493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:40.598 [2024-11-19 08:48:19.827562] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while.
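The trace up to here is the FTL bring-up: get_bdev_size probes basen1 (4096 B blocks x 1310720 blocks = 5120 MiB), clear_lvols deletes the stale lvstore, a fresh lvstore lvs plus a 20480 MiB thin-provisioned lvol basen1p0 are created on top of basen1, the second NVMe controller is attached as cache and split into the 5120 MiB cachen1p0, and bdev_ftl_create then pairs the lvol (base device) with that split (NV cache). A minimal stand-alone sketch of the size probe the @1386-@1392 steps perform, assuming rpc.py and jq are on PATH; the helper name here is ours, not SPDK's:

# Probe a bdev and report its size in MiB, as common/autotest_common.sh does above.
get_bdev_size_mib() {
    local bdev=$1 info bs nb
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev")
    bs=$(jq '.[] .block_size' <<< "$info")    # 4096 for basen1
    nb=$(jq '.[] .num_blocks' <<< "$info")    # 1310720 for basen1
    echo $(( bs * nb / 1024 / 1024 ))         # 4096 * 1310720 / 2^20 = 5120 MiB
}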
00:29:40.598 [2024-11-19 08:48:19.827583] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks
00:29:43.887 [2024-11-19 08:48:22.905610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:43.887 [2024-11-19 08:48:22.905697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache
00:29:43.887 [2024-11-19 08:48:22.905722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3078.067 ms
00:29:43.887 [2024-11-19 08:48:22.905737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:43.887 [2024-11-19 08:48:22.937830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:43.887 [2024-11-19 08:48:22.938166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:29:43.887 [2024-11-19 08:48:22.938200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.842 ms
00:29:43.887 [2024-11-19 08:48:22.938217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:43.887 [2024-11-19 08:48:22.938341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:43.887 [2024-11-19 08:48:22.938381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses
00:29:43.887 [2024-11-19 08:48:22.938395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms
00:29:43.887 [2024-11-19 08:48:22.938412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:43.887 [2024-11-19 08:48:22.975893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:43.887 [2024-11-19 08:48:22.976138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:29:43.887 [2024-11-19 08:48:22.976186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.409 ms
00:29:43.887 [2024-11-19 08:48:22.976205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:43.887 [2024-11-19 08:48:22.976257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:43.887 [2024-11-19 08:48:22.976279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:29:43.887 [2024-11-19 08:48:22.976292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:29:43.887 [2024-11-19 08:48:22.976305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:43.887 [2024-11-19 08:48:22.976770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:43.887 [2024-11-19 08:48:22.976811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:29:43.887 [2024-11-19 08:48:22.976825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.388 ms
00:29:43.887 [2024-11-19 08:48:22.976838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:43.887 [2024-11-19 08:48:22.976899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:43.887 [2024-11-19 08:48:22.976918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:29:43.887 [2024-11-19 08:48:22.976933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms
00:29:43.887 [2024-11-19 08:48:22.976948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:43.887 [2024-11-19 08:48:22.993891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:43.888 [2024-11-19 08:48:22.994133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:29:43.888 [2024-11-19 08:48:22.994164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.919 ms
00:29:43.888 [2024-11-19 08:48:22.994180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:43.888 [2024-11-19 08:48:23.006762] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB
00:29:43.888 [2024-11-19 08:48:23.007706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:43.888 [2024-11-19 08:48:23.008016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P
00:29:43.888 [2024-11-19 08:48:23.008053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.395 ms
00:29:43.888 [2024-11-19 08:48:23.008068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:43.888 [2024-11-19 08:48:23.048364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:43.888 [2024-11-19 08:48:23.048577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P
00:29:43.888 [2024-11-19 08:48:23.048615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.253 ms
00:29:43.888 [2024-11-19 08:48:23.048667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:43.888 [2024-11-19 08:48:23.048783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:43.888 [2024-11-19 08:48:23.048807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization
00:29:43.888 [2024-11-19 08:48:23.048826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms
00:29:43.888 [2024-11-19 08:48:23.048839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:43.888 [2024-11-19 08:48:23.078004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:43.888 [2024-11-19 08:48:23.078223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata
00:29:43.888 [2024-11-19 08:48:23.078260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.078 ms
00:29:43.888 [2024-11-19 08:48:23.078275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:43.888 [2024-11-19 08:48:23.107364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:43.888 [2024-11-19 08:48:23.107405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata
00:29:43.888 [2024-11-19 08:48:23.107442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.008 ms
00:29:43.888 [2024-11-19 08:48:23.107453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:43.888 [2024-11-19 08:48:23.108276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:43.888 [2024-11-19 08:48:23.108462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing
00:29:43.888 [2024-11-19 08:48:23.108526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.760 ms
00:29:43.888 [2024-11-19 08:48:23.108539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:44.148 [2024-11-19 08:48:23.212038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:44.148 [2024-11-19 08:48:23.212298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region
00:29:44.148 [2024-11-19 08:48:23.212342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 103.405 ms
00:29:44.148 [2024-11-19 08:48:23.212357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:44.148 [2024-11-19 08:48:23.243273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
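Each FTL management step emits the same four trace_step lines (Action, name, duration, status; status 0 means success), so slow steps stand out by duration alone: Scrub NV cache dominates here at 3078.067 ms while most metadata steps finish in well under a millisecond. One way to skim a captured log for the slow steps, assuming GNU grep with -P support and the console output saved to ftl.log (both assumptions of this sketch, not part of the test):

# Pair each step name with its duration and list the slowest first.
paste <(grep -oP 'name: \K.*' ftl.log) \
      <(grep -oP 'duration: \K[0-9.]+' ftl.log) |
    sort -t$'\t' -k2,2 -rn | head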
00:29:44.148 [2024-11-19 08:48:23.243316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map
00:29:44.148 [2024-11-19 08:48:23.243378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.804 ms
00:29:44.148 [2024-11-19 08:48:23.243390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:44.148 [2024-11-19 08:48:23.272528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:44.148 [2024-11-19 08:48:23.272751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log
00:29:44.148 [2024-11-19 08:48:23.272784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.089 ms
00:29:44.148 [2024-11-19 08:48:23.272797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:44.148 [2024-11-19 08:48:23.302553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:44.148 [2024-11-19 08:48:23.302594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state
00:29:44.148 [2024-11-19 08:48:23.302661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.688 ms
00:29:44.148 [2024-11-19 08:48:23.302675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:44.148 [2024-11-19 08:48:23.302746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:44.148 [2024-11-19 08:48:23.302764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller
00:29:44.148 [2024-11-19 08:48:23.302781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms
00:29:44.148 [2024-11-19 08:48:23.302792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:44.148 [2024-11-19 08:48:23.302931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:44.148 [2024-11-19 08:48:23.302956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:29:44.148 [2024-11-19 08:48:23.302990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms
00:29:44.148 [2024-11-19 08:48:23.303018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:44.148 [2024-11-19 08:48:23.304172] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3490.678 ms, result 0
00:29:44.148 {
00:29:44.148 "name": "ftl",
00:29:44.148 "uuid": "dbcbb735-53c4-4435-8bd6-6f167c6b6c26"
00:29:44.148 }
00:29:44.148 08:48:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP
00:29:44.408 [2024-11-19 08:48:23.623301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:44.408 08:48:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
00:29:44.667 08:48:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
00:29:44.926 [2024-11-19 08:48:24.183999] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:29:44.926 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
00:29:45.185 [2024-11-19 08:48:24.473806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:29:45.444 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=()
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 ))
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations ))
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1'
00:29:45.703 Fill FTL, iteration 1
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]]
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=81320
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 81320 /var/tmp/spdk.tgt.sock
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81320 ']'
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...'
00:29:45.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:45.703 08:48:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:45.703 [2024-11-19 08:48:24.970415] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
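Condensed, the target-side export performed by the four nvmf_* RPCs above: the FTL bdev becomes namespace 1 of subsystem nqn.2018-09.io.spdk:cnode0, listening on 127.0.0.1:4420 over TCP. Each command appears verbatim in the trace; only the rpc.py path is shortened here:

rpc.py nvmf_create_transport --trtype TCP
rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1

The upgrade_shutdown.sh parameter block that follows then defines the workload: 2 iterations of a 1024 x 1 MiB (1 GiB) fill at queue depth 2, with one checksum collected per iteration in sums[].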
00:29:45.704 [2024-11-19 08:48:24.970786] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81320 ]
00:29:45.963 [2024-11-19 08:48:25.133544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:45.963 [2024-11-19 08:48:25.222218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:46.899 08:48:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:46.899 08:48:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:29:46.899 08:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
00:29:47.158 ftln1
00:29:47.158 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": ['
00:29:47.158 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
00:29:47.416 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}'
00:29:47.416 08:48:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 81320
00:29:47.416 08:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81320 ']'
00:29:47.416 08:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81320
00:29:47.416 08:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:29:47.416 08:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:47.416 08:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81320
00:29:47.416 killing process with pid 81320
00:29:47.416 08:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:47.416 08:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:47.416 08:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81320'
00:29:47.416 08:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 81320
00:29:47.416 08:48:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81320
00:29:49.320 08:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid
00:29:49.320 08:48:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
00:29:49.320 [2024-11-19 08:48:28.603642] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
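The initiator side mirrors the export: a helper spdk_tgt on core 1 (RPC socket /var/tmp/spdk.tgt.sock) attaches to the listener, which surfaces the remote namespace as ftln1, and its bdev subsystem is dumped into the config/ini.json that every later spdk_dd run loads via --json; the helper is then killed, since only the JSON is needed. The three RPC/echo steps appear in the trace above; wrapping them in a single redirection, as sketched here, is our condensation:

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
$rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
{
    echo '{"subsystems": ['              # wrap the bdev subsystem dump ...
    $rpc save_subsystem_config -n bdev   # ... in a top-level config object
    echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json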
00:29:49.320 [2024-11-19 08:48:28.603821] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81367 ]
00:29:49.578 [2024-11-19 08:48:28.787432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:49.836 [2024-11-19 08:48:28.888543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:51.227 [2024-11-19T08:48:31.460Z] Copying: 200/1024 [MB] (200 MBps)
[2024-11-19T08:48:32.396Z] Copying: 403/1024 [MB] (203 MBps)
[2024-11-19T08:48:33.332Z] Copying: 612/1024 [MB] (209 MBps)
[2024-11-19T08:48:34.267Z] Copying: 825/1024 [MB] (213 MBps)
[2024-11-19T08:48:35.641Z] Copying: 1024/1024 [MB] (average 206 MBps)
00:29:56.345
00:29:56.345 Calculate MD5 checksum, iteration 1
00:29:56.345 08:48:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024
00:29:56.345 08:48:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1'
00:29:56.345 08:48:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:29:56.345 08:48:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:29:56.345 08:48:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:29:56.345 08:48:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:29:56.345 08:48:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:29:56.345 08:48:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:29:56.346 [2024-11-19 08:48:35.330437] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
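The fill and its verification are symmetric spdk_dd invocations: the write streams /dev/urandom into the remote ftln1 at --seek, the read-back pulls the same 1024 MiB window out at --skip into a local file, and md5sum of that file becomes the iteration's fingerprint. Paired up from the invocations above, with paths shortened (the checksum shown is the one recorded at upgrade_shutdown.sh@48 below):

spdk_dd --json=ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
spdk_dd --json=ini.json --ib=ftln1 --of=file --bs=1048576 --count=1024 --qd=2 --skip=0
md5sum file    # -> c654c2b87f819d89f362c5f0e9fc0f78 for iteration 1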
00:29:56.346 [2024-11-19 08:48:35.330679] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81437 ]
00:29:56.346 [2024-11-19 08:48:35.511209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:56.346 [2024-11-19 08:48:35.612277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:57.720 [2024-11-19T08:48:38.013Z] Copying: 523/1024 [MB] (523 MBps)
[2024-11-19T08:48:38.950Z] Copying: 1024/1024 [MB] (average 520 MBps)
00:29:59.654
00:29:59.654 08:48:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024
00:29:59.654 08:48:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:30:02.187 08:48:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d '
00:30:02.187 Fill FTL, iteration 2
00:30:02.187 08:48:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c654c2b87f819d89f362c5f0e9fc0f78
00:30:02.187 08:48:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ ))
00:30:02.187 08:48:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations ))
00:30:02.187 08:48:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2'
00:30:02.187 08:48:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024
00:30:02.187 08:48:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:30:02.187 08:48:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:30:02.187 08:48:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:30:02.187 08:48:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:30:02.187 08:48:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024
00:30:02.187 [2024-11-19 08:48:41.119351] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
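The loop arithmetic: each pass writes count=1024 MiB and bumps seek and skip by 1024 afterwards, so iteration 1 covers MiB 0-1023 and iteration 2 covers MiB 1024-2047 of the FTL device, with sums[i] keeping one checksum per window. As a sketch of the control flow (fill and read_back stand in for the tcp_dd calls above; they are not SPDK helpers):

for (( i = 0; i < iterations; i++ )); do
    fill --seek=$(( i * count ))         # 0 on the first pass, 1024 on the second
    read_back --skip=$(( i * count ))    # read the same window back out
    sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')
done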
00:30:02.187 [2024-11-19 08:48:41.119518] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81494 ]
00:30:02.187 [2024-11-19 08:48:41.304778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:02.187 [2024-11-19 08:48:41.427083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:03.567 [2024-11-19T08:48:44.239Z] Copying: 207/1024 [MB] (207 MBps)
[2024-11-19T08:48:45.177Z] Copying: 415/1024 [MB] (208 MBps)
[2024-11-19T08:48:46.114Z] Copying: 624/1024 [MB] (209 MBps)
[2024-11-19T08:48:47.051Z] Copying: 829/1024 [MB] (205 MBps)
[2024-11-19T08:48:47.989Z] Copying: 1024/1024 [MB] (average 206 MBps)
00:30:08.693
00:30:08.693 Calculate MD5 checksum, iteration 2
00:30:08.693 08:48:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048
00:30:08.693 08:48:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2'
00:30:08.693 08:48:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:30:08.693 08:48:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:30:08.693 08:48:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:30:08.693 08:48:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:30:08.693 08:48:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:30:08.693 08:48:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:30:08.694 [2024-11-19 08:48:47.866091] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization...
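Once both fills have been checksummed, the test turns verbose_mode on and asks the FTL bdev how much of the NV cache is actually in use; the jq filter at upgrade_shutdown.sh@63 a little further down counts cache chunks with non-zero utilization:

rpc.py bdev_ftl_get_properties -b ftl |
    jq '[.properties[] | select(.name == "cache_device")
          | .chunks[] | select(.utilization != 0.0)] | length'
# -> 3 in the dump below: chunks 1 and 2 are CLOSED at utilization 1.0 and
#    chunk 3 is OPEN at 0.001953125, so the [[ 3 -eq 0 ]] guard fails and
#    the test goes on to arm prep_upgrade_on_shutdown.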
00:30:08.694 [2024-11-19 08:48:47.866232] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81564 ]
00:30:08.952 [2024-11-19 08:48:48.039784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:08.952 [2024-11-19 08:48:48.141795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:10.857 [2024-11-19T08:48:51.088Z] Copying: 494/1024 [MB] (494 MBps)
[2024-11-19T08:48:51.088Z] Copying: 999/1024 [MB] (505 MBps)
[2024-11-19T08:48:52.024Z] Copying: 1024/1024 [MB] (average 499 MBps)
00:30:12.728
00:30:12.728 08:48:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048
00:30:12.728 08:48:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:30:15.260 08:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d '
00:30:15.260 08:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d1c97df0afbd369d9f0469c5f2604363
00:30:15.260 08:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ ))
00:30:15.260 08:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations ))
00:30:15.260 08:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:30:15.260 [2024-11-19 08:48:54.288520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.260 [2024-11-19 08:48:54.288837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:30:15.260 [2024-11-19 08:48:54.288870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms
00:30:15.260 [2024-11-19 08:48:54.288884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.260 [2024-11-19 08:48:54.288933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.260 [2024-11-19 08:48:54.288950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:30:15.260 [2024-11-19 08:48:54.288964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms
00:30:15.260 [2024-11-19 08:48:54.288984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.260 [2024-11-19 08:48:54.289013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.260 [2024-11-19 08:48:54.289028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:30:15.260 [2024-11-19 08:48:54.289041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:30:15.260 [2024-11-19 08:48:54.289053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.260 [2024-11-19 08:48:54.289144] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.612 ms, result 0
00:30:15.260 true
00:30:15.260 08:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:30:15.519 {
00:30:15.519 "name": "ftl",
00:30:15.519 "properties": [
00:30:15.519 {
00:30:15.519 "name": "superblock_version",
00:30:15.519 "value": 5,
00:30:15.519 "read-only": true
00:30:15.519 },
00:30:15.519 {
00:30:15.519 "name": "base_device",
00:30:15.519 "bands": [
00:30:15.519 { "id": 0, "state": "FREE", "validity": 0.0 },
00:30:15.519 { "id": 1, "state": "FREE", "validity": 0.0 },
00:30:15.519 { "id": 2, "state": "FREE", "validity": 0.0 },
00:30:15.519 { "id": 3, "state": "FREE", "validity": 0.0 },
00:30:15.519 { "id": 4, "state": "FREE", "validity": 0.0 },
00:30:15.519 { "id": 5, "state": "FREE", "validity": 0.0 },
00:30:15.519 { "id": 6, "state": "FREE", "validity": 0.0 },
00:30:15.519 { "id": 7, "state": "FREE", "validity": 0.0 },
00:30:15.519 { "id": 8, "state": "FREE", "validity": 0.0 },
00:30:15.519 { "id": 9, "state": "FREE", "validity": 0.0 },
00:30:15.519 { "id": 10, "state": "FREE", "validity": 0.0 },
00:30:15.519 { "id": 11, "state": "FREE", "validity": 0.0 },
00:30:15.519 { "id": 12, "state": "FREE", "validity": 0.0 },
00:30:15.519 { "id": 13, "state": "FREE", "validity": 0.0 },
00:30:15.519 { "id": 14, "state": "FREE", "validity": 0.0 },
00:30:15.519 { "id": 15, "state": "FREE", "validity": 0.0 },
00:30:15.519 { "id": 16, "state": "FREE", "validity": 0.0 },
00:30:15.519 { "id": 17, "state": "FREE", "validity": 0.0 }
00:30:15.519 ],
00:30:15.519 "read-only": true
00:30:15.519 },
00:30:15.519 {
00:30:15.519 "name": "cache_device",
00:30:15.519 "type": "bdev",
00:30:15.519 "chunks": [
00:30:15.519 { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
00:30:15.519 { "id": 1, "state": "CLOSED", "utilization": 1.0 },
00:30:15.519 { "id": 2, "state": "CLOSED", "utilization": 1.0 },
00:30:15.519 { "id": 3, "state": "OPEN", "utilization": 0.001953125 },
00:30:15.519 { "id": 4, "state": "OPEN", "utilization": 0.0 }
00:30:15.519 ],
00:30:15.519 "read-only": true
00:30:15.519 },
00:30:15.519 {
00:30:15.519 "name": "verbose_mode",
00:30:15.519 "value": true,
00:30:15.519 "unit": "",
00:30:15.519 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:30:15.519 },
00:30:15.519 {
00:30:15.519 "name": "prep_upgrade_on_shutdown",
00:30:15.519 "value": false,
00:30:15.519 "unit": "",
00:30:15.519 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:30:15.519 }
00:30:15.519 ]
00:30:15.519 }
00:30:15.519 08:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
00:30:15.778 [2024-11-19 08:48:54.889250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.778 [2024-11-19 08:48:54.889304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:30:15.778 [2024-11-19 08:48:54.889340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms
00:30:15.778 [2024-11-19 08:48:54.889351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.778 [2024-11-19 08:48:54.889383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.778 [2024-11-19 08:48:54.889398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:30:15.778 [2024-11-19 08:48:54.889410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:30:15.778 [2024-11-19 08:48:54.889420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.778 [2024-11-19 08:48:54.889446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:15.778 [2024-11-19 08:48:54.889459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:30:15.778 [2024-11-19 08:48:54.889470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:30:15.778 [2024-11-19 08:48:54.889480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:15.778 [2024-11-19 08:48:54.889605] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.288 ms, result 0
00:30:15.778 true
00:30:15.778 08:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties
00:30:15.778 08:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:30:16.036 08:48:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:30:16.036 08:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3
00:30:16.036 08:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]]
00:30:16.036 08:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:30:16.295 [2024-11-19 08:48:55.505729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:16.295 [2024-11-19 08:48:55.505780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:30:16.295 [2024-11-19 08:48:55.505799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms
00:30:16.295 [2024-11-19 08:48:55.505810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:16.295 [2024-11-19 08:48:55.505844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:16.295 [2024-11-19 08:48:55.505859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:30:16.295 [2024-11-19 08:48:55.505871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:30:16.295 [2024-11-19 08:48:55.505882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:16.295 [2024-11-19 08:48:55.505907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:16.295 [2024-11-19 08:48:55.505921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:30:16.295 [2024-11-19 08:48:55.505932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:30:16.295 [2024-11-19 08:48:55.505942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:16.295 [2024-11-19 08:48:55.506012] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.270 ms, result 0
00:30:16.295 true
00:30:16.295 08:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:30:16.554 {
00:30:16.554 "name": "ftl",
00:30:16.554 "properties": [
00:30:16.554 {
00:30:16.554 "name": "superblock_version",
00:30:16.554 "value": 5,
00:30:16.554 "read-only": true
00:30:16.554 },
00:30:16.554 {
00:30:16.554 "name": "base_device",
00:30:16.554 "bands": [
00:30:16.554 { "id": 0, "state": "FREE", "validity": 0.0 },
00:30:16.554 { "id": 1, "state": "FREE", "validity": 0.0 },
00:30:16.554 { "id": 2, "state": "FREE", "validity": 0.0 },
00:30:16.554 { "id": 3, "state": "FREE", "validity": 0.0 },
00:30:16.554 { "id": 4, "state": "FREE", "validity": 0.0 },
00:30:16.554 { "id": 5, "state": "FREE", "validity": 0.0 },
00:30:16.554 { "id": 6, "state": "FREE", "validity": 0.0 },
00:30:16.554 { "id": 7, "state": "FREE", "validity": 0.0 },
00:30:16.554 { "id": 8, "state": "FREE", "validity": 0.0 },
00:30:16.554 { "id": 9, "state": "FREE", "validity": 0.0 },
00:30:16.554 { "id": 10, "state": "FREE", "validity": 0.0 },
00:30:16.554 { "id": 11, "state": "FREE", "validity": 0.0 },
00:30:16.554 { "id": 12, "state": "FREE", "validity": 0.0 },
00:30:16.554 { "id": 13, "state": "FREE", "validity": 0.0 },
00:30:16.554 { "id": 14, "state": "FREE", "validity": 0.0 },
00:30:16.554 { "id": 15, "state": "FREE", "validity": 0.0 },
00:30:16.554 { "id": 16, "state": "FREE", "validity": 0.0 },
00:30:16.554 { "id": 17, "state": "FREE", "validity": 0.0 }
00:30:16.554 ],
00:30:16.554 "read-only": true
00:30:16.554 },
00:30:16.554 {
00:30:16.554 "name": "cache_device",
00:30:16.554 "type": "bdev",
00:30:16.554 "chunks": [
00:30:16.554 { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
00:30:16.554 { "id": 1, "state": "CLOSED", "utilization": 1.0 },
00:30:16.554 { "id": 2, "state": "CLOSED", "utilization": 1.0 },
00:30:16.554 { "id": 3, "state": "OPEN", "utilization": 0.001953125 },
00:30:16.554 { "id": 4, "state": "OPEN", "utilization": 0.0 }
00:30:16.554 ],
00:30:16.554 "read-only": true
00:30:16.554 },
00:30:16.554 {
00:30:16.554 "name": "verbose_mode",
00:30:16.554 "value": true,
00:30:16.554 "unit": "",
00:30:16.554 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:30:16.554 },
00:30:16.554 {
00:30:16.554 "name": "prep_upgrade_on_shutdown",
00:30:16.554 "value": true,
00:30:16.554 "unit": "",
00:30:16.554 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:30:16.554 }
00:30:16.554 ]
00:30:16.555 }
00:30:16.555 08:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown
00:30:16.555 08:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81192 ]]
00:30:16.555 08:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81192
00:30:16.555 08:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81192 ']'
00:30:16.555 08:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81192
00:30:16.555 08:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:30:16.555 08:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:16.555 08:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81192
00:30:16.555 killing process with pid 81192
00:30:16.555 08:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:16.555 08:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:16.555 08:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81192'
00:30:16.555 08:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 81192
00:30:16.555 08:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81192
00:30:17.491 [2024-11-19 08:48:56.703858] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:30:17.491 [2024-11-19 08:48:56.719166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:17.491 [2024-11-19 08:48:56.719212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:30:17.491 [2024-11-19 08:48:56.719248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:30:17.491 [2024-11-19 08:48:56.719259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:17.491 [2024-11-19 08:48:56.719294] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:30:17.491 [2024-11-19 08:48:56.722559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:17.491 [2024-11-19 08:48:56.722783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:30:17.491 [2024-11-19 08:48:56.722827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.243 ms
00:30:17.491 [2024-11-19 08:48:56.722840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.541 [2024-11-19 08:49:05.533206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.541 [2024-11-19 08:49:05.533271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:30:27.541 [2024-11-19 08:49:05.533310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8810.384 ms
00:30:27.541 [2024-11-19 08:49:05.533321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.541 [2024-11-19 08:49:05.534595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.541 [2024-11-19 08:49:05.534642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P
00:30:27.541 [2024-11-19 08:49:05.534658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.245 ms
00:30:27.541 [2024-11-19 08:49:05.534671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.541 [2024-11-19 08:49:05.536001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.541 [2024-11-19 08:49:05.536040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims
00:30:27.541 [2024-11-19 08:49:05.536056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.288 ms
00:30:27.541 [2024-11-19 08:49:05.536068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.541 [2024-11-19 08:49:05.549183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.541 [2024-11-19 08:49:05.549386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata
00:30:27.541 [2024-11-19 08:49:05.549432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.057 ms
00:30:27.541 [2024-11-19 08:49:05.549444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.541 [2024-11-19 08:49:05.557280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.541 [2024-11-19 08:49:05.557321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata
00:30:27.541 [2024-11-19 08:49:05.557353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.789 ms
00:30:27.541 [2024-11-19 08:49:05.557364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.541 [2024-11-19 08:49:05.557455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.541 [2024-11-19 08:49:05.557473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata
00:30:27.541 [2024-11-19 08:49:05.557501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms
00:30:27.541 [2024-11-19 08:49:05.557519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.541 [2024-11-19 08:49:05.570441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.541 [2024-11-19 08:49:05.570661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata
00:30:27.541 [2024-11-19 08:49:05.570688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.901 ms
00:30:27.541 [2024-11-19 08:49:05.570700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.541 [2024-11-19 08:49:05.583029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.541 [2024-11-19 08:49:05.583067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata
00:30:27.541 [2024-11-19 08:49:05.583099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.283 ms
00:30:27.541 [2024-11-19 08:49:05.583110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:27.541 [2024-11-19 08:49:05.596128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:27.541 [2024-11-19 08:49:05.596167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock
00:30:27.541 [2024-11-19 08:49:05.596183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.979 ms
00:30:27.541 [2024-11-19 08:49:05.596193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl]
08:49:05.534595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.541 [2024-11-19 08:49:05.534642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:27.541 [2024-11-19 08:49:05.534658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.245 ms 00:30:27.541 [2024-11-19 08:49:05.534671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.541 [2024-11-19 08:49:05.536001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.541 [2024-11-19 08:49:05.536040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:27.541 [2024-11-19 08:49:05.536056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.288 ms 00:30:27.541 [2024-11-19 08:49:05.536068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.541 [2024-11-19 08:49:05.549183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.541 [2024-11-19 08:49:05.549386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:27.541 [2024-11-19 08:49:05.549432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.057 ms 00:30:27.541 [2024-11-19 08:49:05.549444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.541 [2024-11-19 08:49:05.557280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.541 [2024-11-19 08:49:05.557321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:27.541 [2024-11-19 08:49:05.557353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.789 ms 00:30:27.541 [2024-11-19 08:49:05.557364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.541 [2024-11-19 08:49:05.557455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.541 [2024-11-19 08:49:05.557473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:27.541 [2024-11-19 08:49:05.557501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:30:27.541 [2024-11-19 08:49:05.557519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.541 [2024-11-19 08:49:05.570441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.541 [2024-11-19 08:49:05.570661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:27.541 [2024-11-19 08:49:05.570688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.901 ms 00:30:27.541 [2024-11-19 08:49:05.570700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.541 [2024-11-19 08:49:05.583029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.541 [2024-11-19 08:49:05.583067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:27.541 [2024-11-19 08:49:05.583099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.283 ms 00:30:27.541 [2024-11-19 08:49:05.583110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.541 [2024-11-19 08:49:05.596128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.541 [2024-11-19 08:49:05.596167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:27.541 [2024-11-19 08:49:05.596183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.979 ms 00:30:27.541 [2024-11-19 08:49:05.596193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:30:27.541 [2024-11-19 08:49:05.608729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.541 [2024-11-19 08:49:05.608765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:27.541 [2024-11-19 08:49:05.608797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.446 ms 00:30:27.541 [2024-11-19 08:49:05.608807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.541 [2024-11-19 08:49:05.608844] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:27.541 [2024-11-19 08:49:05.608866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:27.541 [2024-11-19 08:49:05.608879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:27.541 [2024-11-19 08:49:05.608905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:27.541 [2024-11-19 08:49:05.608916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:27.541 [2024-11-19 08:49:05.608927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:27.541 [2024-11-19 08:49:05.608938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:27.541 [2024-11-19 08:49:05.608948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:27.541 [2024-11-19 08:49:05.608959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:27.541 [2024-11-19 08:49:05.608969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:27.541 [2024-11-19 08:49:05.608980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:27.541 [2024-11-19 08:49:05.608991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:27.541 [2024-11-19 08:49:05.609002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:27.542 [2024-11-19 08:49:05.609012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:27.542 [2024-11-19 08:49:05.609023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:27.542 [2024-11-19 08:49:05.609034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:27.542 [2024-11-19 08:49:05.609044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:27.542 [2024-11-19 08:49:05.609055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:27.542 [2024-11-19 08:49:05.609065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:27.542 [2024-11-19 08:49:05.609078] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:27.542 [2024-11-19 08:49:05.609089] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: dbcbb735-53c4-4435-8bd6-6f167c6b6c26 00:30:27.542 [2024-11-19 08:49:05.609100] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:27.542 [2024-11-19 
08:49:05.609110] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:30:27.542 [2024-11-19 08:49:05.609120] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:27.542 [2024-11-19 08:49:05.609131] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:27.542 [2024-11-19 08:49:05.609141] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:27.542 [2024-11-19 08:49:05.609151] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:27.542 [2024-11-19 08:49:05.609166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:27.542 [2024-11-19 08:49:05.609175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:27.542 [2024-11-19 08:49:05.609185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:27.542 [2024-11-19 08:49:05.609196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.542 [2024-11-19 08:49:05.609206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:27.542 [2024-11-19 08:49:05.609224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.353 ms 00:30:27.542 [2024-11-19 08:49:05.609235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.542 [2024-11-19 08:49:05.626642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.542 [2024-11-19 08:49:05.626920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:27.542 [2024-11-19 08:49:05.627045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.368 ms 00:30:27.542 [2024-11-19 08:49:05.627098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.542 [2024-11-19 08:49:05.627701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.542 [2024-11-19 08:49:05.627842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:27.542 [2024-11-19 08:49:05.627954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.442 ms 00:30:27.542 [2024-11-19 08:49:05.628092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.542 [2024-11-19 08:49:05.685690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.542 [2024-11-19 08:49:05.685949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:27.542 [2024-11-19 08:49:05.686081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.542 [2024-11-19 08:49:05.686156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.542 [2024-11-19 08:49:05.686338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.542 [2024-11-19 08:49:05.686385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:27.542 [2024-11-19 08:49:05.686425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.542 [2024-11-19 08:49:05.686462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.542 [2024-11-19 08:49:05.686607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.542 [2024-11-19 08:49:05.686708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:27.542 [2024-11-19 08:49:05.686755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.542 [2024-11-19 08:49:05.686871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:30:27.542 [2024-11-19 08:49:05.686949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.542 [2024-11-19 08:49:05.687064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:27.542 [2024-11-19 08:49:05.687177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.542 [2024-11-19 08:49:05.687226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.542 [2024-11-19 08:49:05.794682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.542 [2024-11-19 08:49:05.794964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:27.542 [2024-11-19 08:49:05.795107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.542 [2024-11-19 08:49:05.795159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.542 [2024-11-19 08:49:05.877281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.542 [2024-11-19 08:49:05.877335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:27.542 [2024-11-19 08:49:05.877369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.542 [2024-11-19 08:49:05.877380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.542 [2024-11-19 08:49:05.877497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.542 [2024-11-19 08:49:05.877515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:27.542 [2024-11-19 08:49:05.877527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.542 [2024-11-19 08:49:05.877537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.542 [2024-11-19 08:49:05.877589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.542 [2024-11-19 08:49:05.877611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:27.542 [2024-11-19 08:49:05.877644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.542 [2024-11-19 08:49:05.877675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.542 [2024-11-19 08:49:05.877808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.542 [2024-11-19 08:49:05.877827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:27.542 [2024-11-19 08:49:05.877839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.542 [2024-11-19 08:49:05.877850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.542 [2024-11-19 08:49:05.877895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.542 [2024-11-19 08:49:05.877912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:27.542 [2024-11-19 08:49:05.877929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.542 [2024-11-19 08:49:05.877940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.542 [2024-11-19 08:49:05.877983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.542 [2024-11-19 08:49:05.877998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:27.542 [2024-11-19 08:49:05.878009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.542 [2024-11-19 08:49:05.878036] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.542 [2024-11-19 08:49:05.878104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.542 [2024-11-19 08:49:05.878126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:27.542 [2024-11-19 08:49:05.878138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.542 [2024-11-19 08:49:05.878149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.542 [2024-11-19 08:49:05.878289] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9159.172 ms, result 0 00:30:30.076 08:49:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:30.076 08:49:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:30.076 08:49:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:30.076 08:49:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:30.076 08:49:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:30.076 08:49:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81781 00:30:30.076 08:49:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:30.076 08:49:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:30.076 08:49:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81781 00:30:30.076 08:49:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81781 ']' 00:30:30.076 08:49:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.076 08:49:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:30.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:30.076 08:49:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.076 08:49:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:30.076 08:49:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:30.076 [2024-11-19 08:49:08.941097] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
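At this point the trace has moved from tearing the first target down (the 'FTL shutdown' management process above ran for ~9.2 s) to bringing a fresh one up: tcp_target_setup launches spdk_tgt pinned to core 0 with the saved tgt.json, records its pid, and waitforlisten polls the RPC socket until the app answers. A condensed, standalone sketch of that pattern follows; the polling loop is an illustrative stand-in for the real waitforlisten in autotest_common.sh, with rpc_get_methods used only as a liveness probe, so treat the retry budget and sleep interval as assumptions rather than the literal helper code.

SPDK=/home/vagrant/spdk_repo/spdk
# Launch the target on core 0 with the config captured for this test.
$SPDK/build/bin/spdk_tgt '--cpumask=[0]' --config=$SPDK/test/ftl/config/tgt.json &
spdk_tgt_pid=$!
export spdk_tgt_pid

echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
max_retries=100
for (( i = 0; i < max_retries; i++ )); do
    # rpc_get_methods succeeds only once the target is listening on the socket.
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done
(( i < max_retries )) || { echo "spdk_tgt (pid $spdk_tgt_pid) never started" >&2; exit 1; }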
00:30:30.076 [2024-11-19 08:49:08.941286] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81781 ] 00:30:30.076 [2024-11-19 08:49:09.123094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.076 [2024-11-19 08:49:09.221290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.016 [2024-11-19 08:49:10.064033] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:31.016 [2024-11-19 08:49:10.064154] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:31.016 [2024-11-19 08:49:10.216238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.016 [2024-11-19 08:49:10.216320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:31.016 [2024-11-19 08:49:10.216342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:31.016 [2024-11-19 08:49:10.216355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.016 [2024-11-19 08:49:10.216425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.016 [2024-11-19 08:49:10.216444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:31.016 [2024-11-19 08:49:10.216457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:30:31.016 [2024-11-19 08:49:10.216468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.016 [2024-11-19 08:49:10.216510] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:31.016 [2024-11-19 08:49:10.217456] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:31.016 [2024-11-19 08:49:10.217501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.016 [2024-11-19 08:49:10.217517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:31.016 [2024-11-19 08:49:10.217530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.005 ms 00:30:31.016 [2024-11-19 08:49:10.217542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.016 [2024-11-19 08:49:10.218741] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:31.016 [2024-11-19 08:49:10.235567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.016 [2024-11-19 08:49:10.235626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:31.016 [2024-11-19 08:49:10.235654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.827 ms 00:30:31.016 [2024-11-19 08:49:10.235666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.016 [2024-11-19 08:49:10.235751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.016 [2024-11-19 08:49:10.235783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:31.016 [2024-11-19 08:49:10.235796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:30:31.016 [2024-11-19 08:49:10.235808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.016 [2024-11-19 08:49:10.240479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.016 [2024-11-19 
08:49:10.240543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:31.016 [2024-11-19 08:49:10.240590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.563 ms 00:30:31.016 [2024-11-19 08:49:10.240601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.016 [2024-11-19 08:49:10.240724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.016 [2024-11-19 08:49:10.240748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:31.016 [2024-11-19 08:49:10.240761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:31.016 [2024-11-19 08:49:10.240773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.016 [2024-11-19 08:49:10.240843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.016 [2024-11-19 08:49:10.240867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:31.016 [2024-11-19 08:49:10.240880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:31.016 [2024-11-19 08:49:10.240891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.016 [2024-11-19 08:49:10.240928] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:31.016 [2024-11-19 08:49:10.245334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.016 [2024-11-19 08:49:10.245379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:31.016 [2024-11-19 08:49:10.245400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.415 ms 00:30:31.016 [2024-11-19 08:49:10.245412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.016 [2024-11-19 08:49:10.245448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.016 [2024-11-19 08:49:10.245465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:31.016 [2024-11-19 08:49:10.245477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:31.016 [2024-11-19 08:49:10.245489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.016 [2024-11-19 08:49:10.245568] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:31.016 [2024-11-19 08:49:10.245601] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:31.016 [2024-11-19 08:49:10.245678] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:31.016 [2024-11-19 08:49:10.245702] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:31.016 [2024-11-19 08:49:10.245830] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:31.016 [2024-11-19 08:49:10.245846] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:31.016 [2024-11-19 08:49:10.245861] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:31.016 [2024-11-19 08:49:10.245875] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:31.016 [2024-11-19 08:49:10.245895] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:31.016 [2024-11-19 08:49:10.245907] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:31.016 [2024-11-19 08:49:10.245919] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:31.016 [2024-11-19 08:49:10.245930] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:31.016 [2024-11-19 08:49:10.245941] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:31.016 [2024-11-19 08:49:10.245954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.016 [2024-11-19 08:49:10.245965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:31.016 [2024-11-19 08:49:10.245977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.390 ms 00:30:31.016 [2024-11-19 08:49:10.245988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.016 [2024-11-19 08:49:10.246093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.016 [2024-11-19 08:49:10.246108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:31.016 [2024-11-19 08:49:10.246125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:30:31.016 [2024-11-19 08:49:10.246136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.016 [2024-11-19 08:49:10.246279] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:31.016 [2024-11-19 08:49:10.246304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:31.016 [2024-11-19 08:49:10.246318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:31.016 [2024-11-19 08:49:10.246330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:31.016 [2024-11-19 08:49:10.246342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:31.016 [2024-11-19 08:49:10.246353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:31.016 [2024-11-19 08:49:10.246363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:31.016 [2024-11-19 08:49:10.246374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:31.016 [2024-11-19 08:49:10.246385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:31.016 [2024-11-19 08:49:10.246395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:31.016 [2024-11-19 08:49:10.246406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:31.016 [2024-11-19 08:49:10.246416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:31.016 [2024-11-19 08:49:10.246427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:31.017 [2024-11-19 08:49:10.246437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:31.017 [2024-11-19 08:49:10.246448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:31.017 [2024-11-19 08:49:10.246460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:31.017 [2024-11-19 08:49:10.246471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:31.017 [2024-11-19 08:49:10.246481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:31.017 [2024-11-19 08:49:10.246491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:31.017 [2024-11-19 08:49:10.246502] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:31.017 [2024-11-19 08:49:10.246513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:31.017 [2024-11-19 08:49:10.246523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:31.017 [2024-11-19 08:49:10.246533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:31.017 [2024-11-19 08:49:10.246544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:31.017 [2024-11-19 08:49:10.246554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:31.017 [2024-11-19 08:49:10.246579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:31.017 [2024-11-19 08:49:10.246590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:31.017 [2024-11-19 08:49:10.246601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:31.017 [2024-11-19 08:49:10.246628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:31.017 [2024-11-19 08:49:10.246640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:31.017 [2024-11-19 08:49:10.246651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:31.017 [2024-11-19 08:49:10.246662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:31.017 [2024-11-19 08:49:10.246672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:31.017 [2024-11-19 08:49:10.246683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:31.017 [2024-11-19 08:49:10.246693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:31.017 [2024-11-19 08:49:10.246703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:31.017 [2024-11-19 08:49:10.246714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:31.017 [2024-11-19 08:49:10.246724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:31.017 [2024-11-19 08:49:10.246735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:31.017 [2024-11-19 08:49:10.246745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:31.017 [2024-11-19 08:49:10.246756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:31.017 [2024-11-19 08:49:10.246766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:31.017 [2024-11-19 08:49:10.246776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:31.017 [2024-11-19 08:49:10.246787] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:31.017 [2024-11-19 08:49:10.246798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:31.017 [2024-11-19 08:49:10.246809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:31.017 [2024-11-19 08:49:10.246826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:31.017 [2024-11-19 08:49:10.246838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:31.017 [2024-11-19 08:49:10.246850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:31.017 [2024-11-19 08:49:10.246860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:31.017 [2024-11-19 08:49:10.246871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:31.017 [2024-11-19 08:49:10.246881] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:31.017 [2024-11-19 08:49:10.246891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:31.017 [2024-11-19 08:49:10.246904] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:31.017 [2024-11-19 08:49:10.246918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:31.017 [2024-11-19 08:49:10.246931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:31.017 [2024-11-19 08:49:10.246943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:31.017 [2024-11-19 08:49:10.246954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:31.017 [2024-11-19 08:49:10.246966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:31.017 [2024-11-19 08:49:10.246977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:31.017 [2024-11-19 08:49:10.246988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:31.017 [2024-11-19 08:49:10.246999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:31.017 [2024-11-19 08:49:10.247010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:31.017 [2024-11-19 08:49:10.247022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:31.017 [2024-11-19 08:49:10.247033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:31.017 [2024-11-19 08:49:10.247044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:31.017 [2024-11-19 08:49:10.247056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:31.017 [2024-11-19 08:49:10.247067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:31.017 [2024-11-19 08:49:10.247079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:31.017 [2024-11-19 08:49:10.247091] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:31.017 [2024-11-19 08:49:10.247103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:31.017 [2024-11-19 08:49:10.247117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:31.017 [2024-11-19 08:49:10.247129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:31.017 [2024-11-19 08:49:10.247140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:31.017 [2024-11-19 08:49:10.247152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:31.017 [2024-11-19 08:49:10.247164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:31.017 [2024-11-19 08:49:10.247176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:31.017 [2024-11-19 08:49:10.247188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.956 ms 00:30:31.017 [2024-11-19 08:49:10.247198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:31.017 [2024-11-19 08:49:10.247261] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:31.017 [2024-11-19 08:49:10.247288] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:33.545 [2024-11-19 08:49:12.239498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.545 [2024-11-19 08:49:12.239571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:33.545 [2024-11-19 08:49:12.239594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1992.248 ms 00:30:33.545 [2024-11-19 08:49:12.239638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.545 [2024-11-19 08:49:12.272124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.545 [2024-11-19 08:49:12.272195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:33.545 [2024-11-19 08:49:12.272216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.152 ms 00:30:33.545 [2024-11-19 08:49:12.272228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.545 [2024-11-19 08:49:12.272361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.545 [2024-11-19 08:49:12.272381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:33.545 [2024-11-19 08:49:12.272394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:33.545 [2024-11-19 08:49:12.272404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.545 [2024-11-19 08:49:12.312858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.545 [2024-11-19 08:49:12.312911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:33.545 [2024-11-19 08:49:12.312934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.398 ms 00:30:33.545 [2024-11-19 08:49:12.312945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.545 [2024-11-19 08:49:12.313008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.545 [2024-11-19 08:49:12.313024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:33.545 [2024-11-19 08:49:12.313036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:33.545 [2024-11-19 08:49:12.313047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.545 [2024-11-19 08:49:12.313425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.545 [2024-11-19 08:49:12.313444] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:33.545 [2024-11-19 08:49:12.313456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.290 ms 00:30:33.545 [2024-11-19 08:49:12.313472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.545 [2024-11-19 08:49:12.313528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.545 [2024-11-19 08:49:12.313543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:33.545 [2024-11-19 08:49:12.313554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:30:33.545 [2024-11-19 08:49:12.313564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.545 [2024-11-19 08:49:12.331670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.545 [2024-11-19 08:49:12.331723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:33.546 [2024-11-19 08:49:12.331766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.080 ms 00:30:33.546 [2024-11-19 08:49:12.331803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.546 [2024-11-19 08:49:12.348256] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:33.546 [2024-11-19 08:49:12.348451] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:33.546 [2024-11-19 08:49:12.348493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.546 [2024-11-19 08:49:12.348520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:33.546 [2024-11-19 08:49:12.348540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.538 ms 00:30:33.546 [2024-11-19 08:49:12.348552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.546 [2024-11-19 08:49:12.366827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.546 [2024-11-19 08:49:12.366871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:33.546 [2024-11-19 08:49:12.366890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.225 ms 00:30:33.546 [2024-11-19 08:49:12.366902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.546 [2024-11-19 08:49:12.382435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.546 [2024-11-19 08:49:12.382506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:33.546 [2024-11-19 08:49:12.382539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.482 ms 00:30:33.546 [2024-11-19 08:49:12.382549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.546 [2024-11-19 08:49:12.398267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.546 [2024-11-19 08:49:12.398310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:33.546 [2024-11-19 08:49:12.398327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.644 ms 00:30:33.546 [2024-11-19 08:49:12.398338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.546 [2024-11-19 08:49:12.399279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.546 [2024-11-19 08:49:12.399481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:33.546 [2024-11-19 
08:49:12.399525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.810 ms 00:30:33.546 [2024-11-19 08:49:12.399537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.546 [2024-11-19 08:49:12.482804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.546 [2024-11-19 08:49:12.482873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:33.546 [2024-11-19 08:49:12.482892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 83.229 ms 00:30:33.546 [2024-11-19 08:49:12.482903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.546 [2024-11-19 08:49:12.494698] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:33.546 [2024-11-19 08:49:12.495315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.546 [2024-11-19 08:49:12.495391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:33.546 [2024-11-19 08:49:12.495407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.317 ms 00:30:33.546 [2024-11-19 08:49:12.495419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.546 [2024-11-19 08:49:12.495519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.546 [2024-11-19 08:49:12.495540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:33.546 [2024-11-19 08:49:12.495554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:33.546 [2024-11-19 08:49:12.495570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.546 [2024-11-19 08:49:12.495685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.546 [2024-11-19 08:49:12.495706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:33.546 [2024-11-19 08:49:12.495719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:33.546 [2024-11-19 08:49:12.495731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.546 [2024-11-19 08:49:12.495781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.546 [2024-11-19 08:49:12.495799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:33.546 [2024-11-19 08:49:12.495818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:33.546 [2024-11-19 08:49:12.495830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.546 [2024-11-19 08:49:12.495874] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:33.546 [2024-11-19 08:49:12.495891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.546 [2024-11-19 08:49:12.495902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:33.546 [2024-11-19 08:49:12.495914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:33.546 [2024-11-19 08:49:12.495926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.546 [2024-11-19 08:49:12.525763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.546 [2024-11-19 08:49:12.525811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:33.546 [2024-11-19 08:49:12.525855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.811 ms 00:30:33.546 [2024-11-19 08:49:12.525866] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.546 [2024-11-19 08:49:12.525953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.546 [2024-11-19 08:49:12.525971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:33.546 [2024-11-19 08:49:12.525982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:30:33.546 [2024-11-19 08:49:12.525993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.546 [2024-11-19 08:49:12.527228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2310.479 ms, result 0 00:30:33.546 [2024-11-19 08:49:12.542236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.546 [2024-11-19 08:49:12.558235] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:33.546 [2024-11-19 08:49:12.566955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:33.546 08:49:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.546 08:49:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:33.546 08:49:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:33.546 08:49:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:33.546 08:49:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:33.804 [2024-11-19 08:49:12.863215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.805 [2024-11-19 08:49:12.863267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:33.805 [2024-11-19 08:49:12.863288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:33.805 [2024-11-19 08:49:12.863306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.805 [2024-11-19 08:49:12.863342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.805 [2024-11-19 08:49:12.863358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:33.805 [2024-11-19 08:49:12.863370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:33.805 [2024-11-19 08:49:12.863382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.805 [2024-11-19 08:49:12.863410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:33.805 [2024-11-19 08:49:12.863423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:33.805 [2024-11-19 08:49:12.863435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:33.805 [2024-11-19 08:49:12.863446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:33.805 [2024-11-19 08:49:12.863525] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.312 ms, result 0 00:30:33.805 true 00:30:33.805 08:49:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:34.063 { 00:30:34.063 "name": "ftl", 00:30:34.063 "properties": [ 00:30:34.063 { 00:30:34.063 "name": "superblock_version", 00:30:34.063 "value": 5, 00:30:34.063 "read-only": true 00:30:34.063 }, 
00:30:34.063 { 00:30:34.063 "name": "base_device", 00:30:34.063 "bands": [ 00:30:34.063 { 00:30:34.063 "id": 0, 00:30:34.063 "state": "CLOSED", 00:30:34.063 "validity": 1.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 1, 00:30:34.063 "state": "CLOSED", 00:30:34.063 "validity": 1.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 2, 00:30:34.063 "state": "CLOSED", 00:30:34.063 "validity": 0.007843137254901933 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 3, 00:30:34.063 "state": "FREE", 00:30:34.063 "validity": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 4, 00:30:34.063 "state": "FREE", 00:30:34.063 "validity": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 5, 00:30:34.063 "state": "FREE", 00:30:34.063 "validity": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 6, 00:30:34.063 "state": "FREE", 00:30:34.063 "validity": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 7, 00:30:34.063 "state": "FREE", 00:30:34.063 "validity": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 8, 00:30:34.063 "state": "FREE", 00:30:34.063 "validity": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 9, 00:30:34.063 "state": "FREE", 00:30:34.063 "validity": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 10, 00:30:34.063 "state": "FREE", 00:30:34.063 "validity": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 11, 00:30:34.063 "state": "FREE", 00:30:34.063 "validity": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 12, 00:30:34.063 "state": "FREE", 00:30:34.063 "validity": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 13, 00:30:34.063 "state": "FREE", 00:30:34.063 "validity": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 14, 00:30:34.063 "state": "FREE", 00:30:34.063 "validity": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 15, 00:30:34.063 "state": "FREE", 00:30:34.063 "validity": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 16, 00:30:34.063 "state": "FREE", 00:30:34.063 "validity": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 17, 00:30:34.063 "state": "FREE", 00:30:34.063 "validity": 0.0 00:30:34.063 } 00:30:34.063 ], 00:30:34.063 "read-only": true 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "name": "cache_device", 00:30:34.063 "type": "bdev", 00:30:34.063 "chunks": [ 00:30:34.063 { 00:30:34.063 "id": 0, 00:30:34.063 "state": "INACTIVE", 00:30:34.063 "utilization": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 1, 00:30:34.063 "state": "OPEN", 00:30:34.063 "utilization": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 2, 00:30:34.063 "state": "OPEN", 00:30:34.063 "utilization": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 3, 00:30:34.063 "state": "FREE", 00:30:34.063 "utilization": 0.0 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "id": 4, 00:30:34.063 "state": "FREE", 00:30:34.063 "utilization": 0.0 00:30:34.063 } 00:30:34.063 ], 00:30:34.063 "read-only": true 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "name": "verbose_mode", 00:30:34.063 "value": true, 00:30:34.063 "unit": "", 00:30:34.063 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:34.063 }, 00:30:34.063 { 00:30:34.063 "name": "prep_upgrade_on_shutdown", 00:30:34.063 "value": false, 00:30:34.063 "unit": "", 00:30:34.063 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:34.063 } 00:30:34.063 ] 00:30:34.063 } 00:30:34.063 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:34.063 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:34.063 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:34.322 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:34.322 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:34.322 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:34.322 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:34.322 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:34.581 Validate MD5 checksum, iteration 1 00:30:34.581 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:34.581 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:34.581 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:34.581 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:34.581 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:34.581 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:34.581 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:34.581 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:34.581 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:34.581 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:34.581 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:34.581 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:34.581 08:49:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:34.581 [2024-11-19 08:49:13.875019] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:30:34.581 [2024-11-19 08:49:13.875386] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81842 ] 00:30:34.840 [2024-11-19 08:49:14.051286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.099 [2024-11-19 08:49:14.157809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.477  [2024-11-19T08:49:17.148Z] Copying: 455/1024 [MB] (455 MBps) [2024-11-19T08:49:17.149Z] Copying: 903/1024 [MB] (448 MBps) [2024-11-19T08:49:18.525Z] Copying: 1024/1024 [MB] (average 453 MBps) 00:30:39.229 00:30:39.229 08:49:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:39.229 08:49:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:41.758 08:49:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:41.758 Validate MD5 checksum, iteration 2 00:30:41.758 08:49:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c654c2b87f819d89f362c5f0e9fc0f78 00:30:41.758 08:49:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c654c2b87f819d89f362c5f0e9fc0f78 != \c\6\5\4\c\2\b\8\7\f\8\1\9\d\8\9\f\3\6\2\c\5\f\0\e\9\f\c\0\f\7\8 ]] 00:30:41.758 08:49:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:41.758 08:49:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:41.758 08:49:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:41.758 08:49:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:41.758 08:49:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:41.758 08:49:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:41.758 08:49:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:41.758 08:49:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:41.758 08:49:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:41.758 [2024-11-19 08:49:20.700046] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
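Note: each "Validate MD5 checksum" iteration reads the next 1 GiB window of the ftln1 volume over NVMe/TCP with spdk_dd (1 MiB blocks, queue depth 2, with skip advancing by 1024 blocks per pass, hence --skip=1024 in iteration 2 below), hashes the output file, and compares the digest against the sum recorded for that window earlier in the test. Roughly, with $testfile as the scratch file (/home/vagrant/spdk_repo/spdk/test/ftl/file here) and expected[i] standing in for the recorded per-iteration digests (name hypothetical):

  skip=0
  for ((i = 0; i < iterations; i++)); do
      tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
      ((skip += 1024))
      sum=$(md5sum "$testfile" | cut -f1 -d' ')
      [[ $sum != "${expected[i]}" ]] && return 1   # any mismatch fails the test
  done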
00:30:41.758 [2024-11-19 08:49:20.700385] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81915 ] 00:30:41.758 [2024-11-19 08:49:20.879914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.758 [2024-11-19 08:49:20.989763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.664  [2024-11-19T08:49:23.896Z] Copying: 445/1024 [MB] (445 MBps) [2024-11-19T08:49:24.155Z] Copying: 881/1024 [MB] (436 MBps) [2024-11-19T08:49:25.114Z] Copying: 1024/1024 [MB] (average 441 MBps) 00:30:45.818 00:30:45.818 08:49:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:45.818 08:49:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d1c97df0afbd369d9f0469c5f2604363 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d1c97df0afbd369d9f0469c5f2604363 != \d\1\c\9\7\d\f\0\a\f\b\d\3\6\9\d\9\f\0\4\6\9\c\5\f\2\6\0\4\3\6\3 ]] 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81781 ]] 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81781 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81988 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:48.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81988 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81988 ']' 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
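Note: this is the dirty-shutdown step under test. The target is killed with SIGKILL (kill -9 81781), so FTL never gets to persist its clean-shutdown state, and a fresh spdk_tgt (pid 81988) is started from the saved tgt.json. The startup trace that follows is therefore a dirty recovery: the superblock loads with "SHM: clean 0", P2L checkpoints are restored, the two open NV-cache chunks (seq id 14 and 15) are replayed and closed, and the L2P is restored before the NVMe/TCP listener returns on port 4420. The shutdown/restart pattern, sketched from the trace (the pid capture via $! is assumed, not shown):

  kill -9 "$spdk_tgt_pid"        # SIGKILL: no clean shutdown, FTL state stays dirty
  unset spdk_tgt_pid
  "$spdk_tgt_bin" '--cpumask=[0]' --config="$spdk_tgt_cnfg" &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"  # wait until /var/tmp/spdk.sock answers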
00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:48.350 08:49:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:48.350 [2024-11-19 08:49:27.347222] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:48.350 [2024-11-19 08:49:27.347382] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81988 ] 00:30:48.350 [2024-11-19 08:49:27.527697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.350 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 81781 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:48.610 [2024-11-19 08:49:27.652725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.549 [2024-11-19 08:49:28.512299] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:49.549 [2024-11-19 08:49:28.512385] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:49.549 [2024-11-19 08:49:28.664360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.549 [2024-11-19 08:49:28.664652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:49.549 [2024-11-19 08:49:28.664685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:49.549 [2024-11-19 08:49:28.664698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.549 [2024-11-19 08:49:28.664777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.549 [2024-11-19 08:49:28.664796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:49.549 [2024-11-19 08:49:28.664809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:30:49.549 [2024-11-19 08:49:28.664819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.549 [2024-11-19 08:49:28.664860] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:49.549 [2024-11-19 08:49:28.665780] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:49.549 [2024-11-19 08:49:28.665813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.549 [2024-11-19 08:49:28.665826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:49.549 [2024-11-19 08:49:28.665837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.968 ms 00:30:49.549 [2024-11-19 08:49:28.665847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.549 [2024-11-19 08:49:28.666444] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:49.549 [2024-11-19 08:49:28.684829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.549 [2024-11-19 08:49:28.684871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:49.549 [2024-11-19 08:49:28.684903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.386 ms 00:30:49.549 [2024-11-19 08:49:28.684913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.549 [2024-11-19 08:49:28.695464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:49.549 [2024-11-19 08:49:28.695533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:49.549 [2024-11-19 08:49:28.695584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:30:49.549 [2024-11-19 08:49:28.695594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.549 [2024-11-19 08:49:28.696208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.549 [2024-11-19 08:49:28.696243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:49.549 [2024-11-19 08:49:28.696259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.481 ms 00:30:49.549 [2024-11-19 08:49:28.696271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.549 [2024-11-19 08:49:28.696354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.549 [2024-11-19 08:49:28.696376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:49.549 [2024-11-19 08:49:28.696388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:30:49.549 [2024-11-19 08:49:28.696399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.549 [2024-11-19 08:49:28.696434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.549 [2024-11-19 08:49:28.696449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:49.549 [2024-11-19 08:49:28.696461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:49.549 [2024-11-19 08:49:28.696486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.549 [2024-11-19 08:49:28.696518] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:49.549 [2024-11-19 08:49:28.700914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.549 [2024-11-19 08:49:28.701098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:49.549 [2024-11-19 08:49:28.701258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.403 ms 00:30:49.549 [2024-11-19 08:49:28.701321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.549 [2024-11-19 08:49:28.701546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.549 [2024-11-19 08:49:28.701599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:49.549 [2024-11-19 08:49:28.701658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:49.549 [2024-11-19 08:49:28.701707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.549 [2024-11-19 08:49:28.701791] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:49.549 [2024-11-19 08:49:28.702012] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:49.549 [2024-11-19 08:49:28.702113] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:49.549 [2024-11-19 08:49:28.702309] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:49.549 [2024-11-19 08:49:28.702502] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:49.550 [2024-11-19 08:49:28.702765] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:49.550 [2024-11-19 08:49:28.702847] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:49.550 [2024-11-19 08:49:28.702978] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:49.550 [2024-11-19 08:49:28.703051] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:49.550 [2024-11-19 08:49:28.703112] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:49.550 [2024-11-19 08:49:28.703229] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:49.550 [2024-11-19 08:49:28.703268] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:49.550 [2024-11-19 08:49:28.703367] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:49.550 [2024-11-19 08:49:28.703435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.550 [2024-11-19 08:49:28.703524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:49.550 [2024-11-19 08:49:28.703570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.647 ms 00:30:49.550 [2024-11-19 08:49:28.703608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.550 [2024-11-19 08:49:28.703802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.550 [2024-11-19 08:49:28.703993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:49.550 [2024-11-19 08:49:28.704047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.112 ms 00:30:49.550 [2024-11-19 08:49:28.704089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.550 [2024-11-19 08:49:28.704245] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:49.550 [2024-11-19 08:49:28.704299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:49.550 [2024-11-19 08:49:28.704351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:49.550 [2024-11-19 08:49:28.704408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.550 [2024-11-19 08:49:28.704558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:49.550 [2024-11-19 08:49:28.704622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:49.550 [2024-11-19 08:49:28.704718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:49.550 [2024-11-19 08:49:28.704805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:49.550 [2024-11-19 08:49:28.704903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:49.550 [2024-11-19 08:49:28.704955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.550 [2024-11-19 08:49:28.704995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:49.550 [2024-11-19 08:49:28.705201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:49.550 [2024-11-19 08:49:28.705254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.550 [2024-11-19 08:49:28.705299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:49.550 [2024-11-19 08:49:28.705454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:49.550 [2024-11-19 08:49:28.705521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.550 [2024-11-19 08:49:28.705562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:49.550 [2024-11-19 08:49:28.705768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:49.550 [2024-11-19 08:49:28.705823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.550 [2024-11-19 08:49:28.705867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:49.550 [2024-11-19 08:49:28.705961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:49.550 [2024-11-19 08:49:28.706017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:49.550 [2024-11-19 08:49:28.706055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:49.550 [2024-11-19 08:49:28.706113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:49.550 [2024-11-19 08:49:28.706169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:49.550 [2024-11-19 08:49:28.706253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:49.550 [2024-11-19 08:49:28.706308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:49.550 [2024-11-19 08:49:28.706346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:49.550 [2024-11-19 08:49:28.706363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:49.550 [2024-11-19 08:49:28.706375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:49.550 [2024-11-19 08:49:28.706385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:49.550 [2024-11-19 08:49:28.706395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:49.550 [2024-11-19 08:49:28.706405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:49.550 [2024-11-19 08:49:28.706414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.550 [2024-11-19 08:49:28.706425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:49.550 [2024-11-19 08:49:28.706435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:49.550 [2024-11-19 08:49:28.706445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.550 [2024-11-19 08:49:28.706455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:49.550 [2024-11-19 08:49:28.706480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:49.550 [2024-11-19 08:49:28.706490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.550 [2024-11-19 08:49:28.706500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:49.550 [2024-11-19 08:49:28.706509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:49.550 [2024-11-19 08:49:28.706519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:49.550 [2024-11-19 08:49:28.706528] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:49.550 [2024-11-19 08:49:28.706539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:49.550 [2024-11-19 08:49:28.706550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:49.550 [2024-11-19 08:49:28.706560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:49.550 [2024-11-19 08:49:28.706571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:49.550 [2024-11-19 08:49:28.706581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:49.550 [2024-11-19 08:49:28.706591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:49.550 [2024-11-19 08:49:28.706601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:49.550 [2024-11-19 08:49:28.706610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:49.550 [2024-11-19 08:49:28.706620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:49.550 [2024-11-19 08:49:28.706632] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:49.550 [2024-11-19 08:49:28.706646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:49.550 [2024-11-19 08:49:28.706658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:49.550 [2024-11-19 08:49:28.706709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:49.550 [2024-11-19 08:49:28.706723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:49.550 [2024-11-19 08:49:28.706734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:49.550 [2024-11-19 08:49:28.706745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:49.550 [2024-11-19 08:49:28.706757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:49.550 [2024-11-19 08:49:28.706770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:49.550 [2024-11-19 08:49:28.706781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:49.550 [2024-11-19 08:49:28.706792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:49.550 [2024-11-19 08:49:28.706803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:49.550 [2024-11-19 08:49:28.706814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:49.550 [2024-11-19 08:49:28.706839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:49.550 [2024-11-19 08:49:28.706850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:49.550 [2024-11-19 08:49:28.706861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:49.550 [2024-11-19 08:49:28.706872] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:49.550 [2024-11-19 08:49:28.706884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:49.550 [2024-11-19 08:49:28.706896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:49.550 [2024-11-19 08:49:28.706906] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:49.550 [2024-11-19 08:49:28.706917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:49.550 [2024-11-19 08:49:28.706927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:49.550 [2024-11-19 08:49:28.706940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.550 [2024-11-19 08:49:28.706958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:49.550 [2024-11-19 08:49:28.706970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.767 ms 00:30:49.550 [2024-11-19 08:49:28.706980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.550 [2024-11-19 08:49:28.736995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.550 [2024-11-19 08:49:28.737050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:49.550 [2024-11-19 08:49:28.737085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.883 ms 00:30:49.551 [2024-11-19 08:49:28.737096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.551 [2024-11-19 08:49:28.737191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.551 [2024-11-19 08:49:28.737207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:49.551 [2024-11-19 08:49:28.737219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:49.551 [2024-11-19 08:49:28.737229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.551 [2024-11-19 08:49:28.772907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.551 [2024-11-19 08:49:28.772970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:49.551 [2024-11-19 08:49:28.773021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.601 ms 00:30:49.551 [2024-11-19 08:49:28.773032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.551 [2024-11-19 08:49:28.773135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.551 [2024-11-19 08:49:28.773151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:49.551 [2024-11-19 08:49:28.773163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:49.551 [2024-11-19 08:49:28.773174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.551 [2024-11-19 08:49:28.773349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.551 [2024-11-19 08:49:28.773367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:49.551 [2024-11-19 08:49:28.773380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:49.551 [2024-11-19 08:49:28.773391] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:49.551 [2024-11-19 08:49:28.773444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.551 [2024-11-19 08:49:28.773459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:49.551 [2024-11-19 08:49:28.773486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:30:49.551 [2024-11-19 08:49:28.773513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.551 [2024-11-19 08:49:28.790559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.551 [2024-11-19 08:49:28.790639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:49.551 [2024-11-19 08:49:28.790674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.995 ms 00:30:49.551 [2024-11-19 08:49:28.790686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.551 [2024-11-19 08:49:28.790863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.551 [2024-11-19 08:49:28.790884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:49.551 [2024-11-19 08:49:28.790897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:49.551 [2024-11-19 08:49:28.790907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.551 [2024-11-19 08:49:28.821721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.551 [2024-11-19 08:49:28.821764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:49.551 [2024-11-19 08:49:28.821797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.726 ms 00:30:49.551 [2024-11-19 08:49:28.821808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.551 [2024-11-19 08:49:28.832880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.551 [2024-11-19 08:49:28.832918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:49.551 [2024-11-19 08:49:28.832957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.563 ms 00:30:49.551 [2024-11-19 08:49:28.832967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.810 [2024-11-19 08:49:28.898803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.810 [2024-11-19 08:49:28.898875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:49.810 [2024-11-19 08:49:28.898918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 65.764 ms 00:30:49.810 [2024-11-19 08:49:28.898931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.810 [2024-11-19 08:49:28.899199] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:49.810 [2024-11-19 08:49:28.899347] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:49.811 [2024-11-19 08:49:28.899478] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:49.811 [2024-11-19 08:49:28.899598] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:49.811 [2024-11-19 08:49:28.899612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.811 [2024-11-19 08:49:28.899623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:49.811 [2024-11-19 
08:49:28.899657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.579 ms 00:30:49.811 [2024-11-19 08:49:28.899672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.811 [2024-11-19 08:49:28.899859] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:49.811 [2024-11-19 08:49:28.899882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.811 [2024-11-19 08:49:28.899898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:49.811 [2024-11-19 08:49:28.899912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:30:49.811 [2024-11-19 08:49:28.899923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.811 [2024-11-19 08:49:28.917738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.811 [2024-11-19 08:49:28.917782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:49.811 [2024-11-19 08:49:28.917799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.765 ms 00:30:49.811 [2024-11-19 08:49:28.917810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.811 [2024-11-19 08:49:28.928457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.811 [2024-11-19 08:49:28.928695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:49.811 [2024-11-19 08:49:28.928723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:49.811 [2024-11-19 08:49:28.928738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:49.811 [2024-11-19 08:49:28.928854] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:49.811 [2024-11-19 08:49:28.929013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:49.811 [2024-11-19 08:49:28.929031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:49.811 [2024-11-19 08:49:28.929043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.162 ms 00:30:49.811 [2024-11-19 08:49:28.929054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.378 [2024-11-19 08:49:29.548916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.378 [2024-11-19 08:49:29.549334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:50.378 [2024-11-19 08:49:29.549368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 618.801 ms 00:30:50.378 [2024-11-19 08:49:29.549383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.378 [2024-11-19 08:49:29.554339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.378 [2024-11-19 08:49:29.554386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:50.378 [2024-11-19 08:49:29.554420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.081 ms 00:30:50.378 [2024-11-19 08:49:29.554432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.378 [2024-11-19 08:49:29.554940] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:50.378 [2024-11-19 08:49:29.554978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.378 [2024-11-19 08:49:29.554992] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:50.378 [2024-11-19 08:49:29.555006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.479 ms 00:30:50.378 [2024-11-19 08:49:29.555017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.378 [2024-11-19 08:49:29.555064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.378 [2024-11-19 08:49:29.555083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:50.378 [2024-11-19 08:49:29.555096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:50.378 [2024-11-19 08:49:29.555122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.378 [2024-11-19 08:49:29.555223] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 626.340 ms, result 0 00:30:50.378 [2024-11-19 08:49:29.555271] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:50.378 [2024-11-19 08:49:29.555354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.378 [2024-11-19 08:49:29.555367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:50.378 [2024-11-19 08:49:29.555378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.085 ms 00:30:50.378 [2024-11-19 08:49:29.555388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.947 [2024-11-19 08:49:30.141643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.947 [2024-11-19 08:49:30.141733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:50.947 [2024-11-19 08:49:30.141754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 585.067 ms 00:30:50.947 [2024-11-19 08:49:30.141765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.947 [2024-11-19 08:49:30.146450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.947 [2024-11-19 08:49:30.146681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:50.947 [2024-11-19 08:49:30.146741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.001 ms 00:30:50.947 [2024-11-19 08:49:30.146754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.947 [2024-11-19 08:49:30.147163] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:50.947 [2024-11-19 08:49:30.147200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.947 [2024-11-19 08:49:30.147214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:50.947 [2024-11-19 08:49:30.147227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.395 ms 00:30:50.947 [2024-11-19 08:49:30.147238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.947 [2024-11-19 08:49:30.147284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.947 [2024-11-19 08:49:30.147302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:50.947 [2024-11-19 08:49:30.147330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:50.947 [2024-11-19 08:49:30.147340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.947 [2024-11-19 
08:49:30.147389] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 592.115 ms, result 0 00:30:50.947 [2024-11-19 08:49:30.147504] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:50.947 [2024-11-19 08:49:30.147522] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:50.947 [2024-11-19 08:49:30.147536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.947 [2024-11-19 08:49:30.147549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:50.947 [2024-11-19 08:49:30.147561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1218.714 ms 00:30:50.947 [2024-11-19 08:49:30.147572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.947 [2024-11-19 08:49:30.147613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.947 [2024-11-19 08:49:30.147629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:50.947 [2024-11-19 08:49:30.147647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:50.947 [2024-11-19 08:49:30.147674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.947 [2024-11-19 08:49:30.159179] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:50.947 [2024-11-19 08:49:30.159375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.947 [2024-11-19 08:49:30.159394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:50.947 [2024-11-19 08:49:30.159407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.662 ms 00:30:50.947 [2024-11-19 08:49:30.159419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.947 [2024-11-19 08:49:30.160227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.947 [2024-11-19 08:49:30.160265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:50.947 [2024-11-19 08:49:30.160286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.688 ms 00:30:50.947 [2024-11-19 08:49:30.160298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.947 [2024-11-19 08:49:30.162764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.947 [2024-11-19 08:49:30.162791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:50.947 [2024-11-19 08:49:30.162820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.439 ms 00:30:50.947 [2024-11-19 08:49:30.162831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.947 [2024-11-19 08:49:30.162877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.947 [2024-11-19 08:49:30.162892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:50.947 [2024-11-19 08:49:30.162903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:50.947 [2024-11-19 08:49:30.162918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.947 [2024-11-19 08:49:30.163031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.947 [2024-11-19 08:49:30.163047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:50.947 
[2024-11-19 08:49:30.163058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:50.947 [2024-11-19 08:49:30.163068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.947 [2024-11-19 08:49:30.163094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.947 [2024-11-19 08:49:30.163106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:50.947 [2024-11-19 08:49:30.163117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:50.947 [2024-11-19 08:49:30.163127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.947 [2024-11-19 08:49:30.163165] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:50.947 [2024-11-19 08:49:30.163184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.947 [2024-11-19 08:49:30.163195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:50.947 [2024-11-19 08:49:30.163206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:50.947 [2024-11-19 08:49:30.163216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.947 [2024-11-19 08:49:30.163275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:50.947 [2024-11-19 08:49:30.163290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:50.947 [2024-11-19 08:49:30.163301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:30:50.947 [2024-11-19 08:49:30.163311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:50.947 [2024-11-19 08:49:30.164554] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1499.663 ms, result 0 00:30:50.947 [2024-11-19 08:49:30.179814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.947 [2024-11-19 08:49:30.195864] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:50.947 [2024-11-19 08:49:30.204766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:51.206 Validate MD5 checksum, iteration 1 00:30:51.206 08:49:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:51.206 08:49:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:51.206 08:49:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:51.206 08:49:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:51.206 08:49:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:51.206 08:49:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:51.206 08:49:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:51.206 08:49:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:51.206 08:49:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:51.207 08:49:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:51.207 08:49:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:51.207 08:49:30 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:51.207 08:49:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:51.207 08:49:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:51.207 08:49:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:51.207 [2024-11-19 08:49:30.345400] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:51.207 [2024-11-19 08:49:30.345859] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82018 ] 00:30:51.465 [2024-11-19 08:49:30.530279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.465 [2024-11-19 08:49:30.653946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.370  [2024-11-19T08:49:33.604Z] Copying: 472/1024 [MB] (472 MBps) [2024-11-19T08:49:33.604Z] Copying: 899/1024 [MB] (427 MBps) [2024-11-19T08:49:36.892Z] Copying: 1024/1024 [MB] (average 453 MBps) 00:30:57.596 00:30:57.596 08:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:57.596 08:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:59.503 08:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:59.503 Validate MD5 checksum, iteration 2 00:30:59.503 08:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c654c2b87f819d89f362c5f0e9fc0f78 00:30:59.503 08:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c654c2b87f819d89f362c5f0e9fc0f78 != \c\6\5\4\c\2\b\8\7\f\8\1\9\d\8\9\f\3\6\2\c\5\f\0\e\9\f\c\0\f\7\8 ]] 00:30:59.503 08:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:59.503 08:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:59.503 08:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:59.503 08:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:59.503 08:49:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:59.503 08:49:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:59.503 08:49:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:59.503 08:49:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:59.503 08:49:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:59.503 
[2024-11-19 08:49:38.688491] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 00:30:59.503 [2024-11-19 08:49:38.688701] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82107 ] 00:30:59.763 [2024-11-19 08:49:38.873045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.763 [2024-11-19 08:49:38.997715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.672  [2024-11-19T08:49:41.904Z] Copying: 465/1024 [MB] (465 MBps) [2024-11-19T08:49:41.904Z] Copying: 930/1024 [MB] (465 MBps) [2024-11-19T08:49:43.808Z] Copying: 1024/1024 [MB] (average 462 MBps) 00:31:04.512 00:31:04.512 08:49:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:04.512 08:49:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d1c97df0afbd369d9f0469c5f2604363 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d1c97df0afbd369d9f0469c5f2604363 != \d\1\c\9\7\d\f\0\a\f\b\d\3\6\9\d\9\f\0\4\6\9\c\5\f\2\6\0\4\3\6\3 ]] 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81988 ]] 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81988 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81988 ']' 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81988 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81988 00:31:07.046 killing process with pid 81988 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 81988' 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 81988 00:31:07.046 08:49:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81988 00:31:07.615 [2024-11-19 08:49:46.778053] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:07.615 [2024-11-19 08:49:46.794010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.615 [2024-11-19 08:49:46.794211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:07.615 [2024-11-19 08:49:46.794347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:07.615 [2024-11-19 08:49:46.794400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.615 [2024-11-19 08:49:46.794464] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:07.615 [2024-11-19 08:49:46.797660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.615 [2024-11-19 08:49:46.797693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:07.615 [2024-11-19 08:49:46.797724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.946 ms 00:31:07.615 [2024-11-19 08:49:46.797741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.615 [2024-11-19 08:49:46.797938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.615 [2024-11-19 08:49:46.797954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:07.615 [2024-11-19 08:49:46.797965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.171 ms 00:31:07.615 [2024-11-19 08:49:46.797975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.615 [2024-11-19 08:49:46.799120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.615 [2024-11-19 08:49:46.799159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:07.615 [2024-11-19 08:49:46.799190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.126 ms 00:31:07.615 [2024-11-19 08:49:46.799201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.615 [2024-11-19 08:49:46.800476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.615 [2024-11-19 08:49:46.800523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:07.615 [2024-11-19 08:49:46.800552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.227 ms 00:31:07.615 [2024-11-19 08:49:46.800563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.615 [2024-11-19 08:49:46.811582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.615 [2024-11-19 08:49:46.811842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:07.615 [2024-11-19 08:49:46.811874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.963 ms 00:31:07.615 [2024-11-19 08:49:46.811896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.615 [2024-11-19 08:49:46.818705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.615 [2024-11-19 08:49:46.818745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:07.615 [2024-11-19 08:49:46.818762] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.744 ms 00:31:07.615 [2024-11-19 08:49:46.818772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.615 [2024-11-19 08:49:46.818852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.615 [2024-11-19 08:49:46.818871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:07.615 [2024-11-19 08:49:46.818882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:31:07.615 [2024-11-19 08:49:46.818893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.615 [2024-11-19 08:49:46.830466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.615 [2024-11-19 08:49:46.830710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:07.615 [2024-11-19 08:49:46.830739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.545 ms 00:31:07.615 [2024-11-19 08:49:46.830751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.615 [2024-11-19 08:49:46.842669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.615 [2024-11-19 08:49:46.842722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:07.615 [2024-11-19 08:49:46.842738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.869 ms 00:31:07.615 [2024-11-19 08:49:46.842748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.615 [2024-11-19 08:49:46.854803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.615 [2024-11-19 08:49:46.854840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:07.615 [2024-11-19 08:49:46.854856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.016 ms 00:31:07.615 [2024-11-19 08:49:46.854866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.615 [2024-11-19 08:49:46.866483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.615 [2024-11-19 08:49:46.866696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:07.615 [2024-11-19 08:49:46.866723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.546 ms 00:31:07.615 [2024-11-19 08:49:46.866734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.615 [2024-11-19 08:49:46.866780] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:07.615 [2024-11-19 08:49:46.866802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:07.615 [2024-11-19 08:49:46.866815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:07.615 [2024-11-19 08:49:46.866826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:07.615 [2024-11-19 08:49:46.866837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:07.615 [2024-11-19 08:49:46.866849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:07.615 [2024-11-19 08:49:46.866859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:07.615 [2024-11-19 08:49:46.866870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:07.615 [2024-11-19 
08:49:46.866881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:07.615 [2024-11-19 08:49:46.866891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:07.615 [2024-11-19 08:49:46.866902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:07.615 [2024-11-19 08:49:46.866913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:07.615 [2024-11-19 08:49:46.866923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:07.615 [2024-11-19 08:49:46.866934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:07.615 [2024-11-19 08:49:46.866944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:07.615 [2024-11-19 08:49:46.866955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:07.615 [2024-11-19 08:49:46.866966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:07.615 [2024-11-19 08:49:46.866977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:07.615 [2024-11-19 08:49:46.866987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:07.615 [2024-11-19 08:49:46.867000] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:07.615 [2024-11-19 08:49:46.867010] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: dbcbb735-53c4-4435-8bd6-6f167c6b6c26 00:31:07.615 [2024-11-19 08:49:46.867036] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:07.615 [2024-11-19 08:49:46.867045] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:07.615 [2024-11-19 08:49:46.867055] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:07.615 [2024-11-19 08:49:46.867065] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:07.615 [2024-11-19 08:49:46.867074] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:07.615 [2024-11-19 08:49:46.867084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:07.615 [2024-11-19 08:49:46.867094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:07.615 [2024-11-19 08:49:46.867104] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:07.615 [2024-11-19 08:49:46.867113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:07.615 [2024-11-19 08:49:46.867123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.615 [2024-11-19 08:49:46.867141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:07.615 [2024-11-19 08:49:46.867153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.344 ms 00:31:07.615 [2024-11-19 08:49:46.867162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.615 [2024-11-19 08:49:46.881927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.615 [2024-11-19 08:49:46.882113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:07.615 [2024-11-19 08:49:46.882141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 14.742 ms 00:31:07.615 [2024-11-19 08:49:46.882153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.615 [2024-11-19 08:49:46.882544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.615 [2024-11-19 08:49:46.882565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:07.615 [2024-11-19 08:49:46.882577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.364 ms 00:31:07.615 [2024-11-19 08:49:46.882588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.875 [2024-11-19 08:49:46.932131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:07.875 [2024-11-19 08:49:46.932189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:07.875 [2024-11-19 08:49:46.932204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:07.875 [2024-11-19 08:49:46.932214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.875 [2024-11-19 08:49:46.932261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:07.875 [2024-11-19 08:49:46.932273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:07.875 [2024-11-19 08:49:46.932284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:07.875 [2024-11-19 08:49:46.932293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.875 [2024-11-19 08:49:46.932399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:07.875 [2024-11-19 08:49:46.932418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:07.875 [2024-11-19 08:49:46.932429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:07.875 [2024-11-19 08:49:46.932439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.875 [2024-11-19 08:49:46.932461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:07.875 [2024-11-19 08:49:46.932479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:07.875 [2024-11-19 08:49:46.932490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:07.875 [2024-11-19 08:49:46.932500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.875 [2024-11-19 08:49:47.022169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:07.875 [2024-11-19 08:49:47.022389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:07.875 [2024-11-19 08:49:47.022420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:07.875 [2024-11-19 08:49:47.022433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.875 [2024-11-19 08:49:47.096591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:07.875 [2024-11-19 08:49:47.096872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:07.875 [2024-11-19 08:49:47.096903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:07.875 [2024-11-19 08:49:47.096917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.875 [2024-11-19 08:49:47.097041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:07.875 [2024-11-19 08:49:47.097061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:07.875 [2024-11-19 08:49:47.097073] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:07.875 [2024-11-19 08:49:47.097084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.875 [2024-11-19 08:49:47.097158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:07.875 [2024-11-19 08:49:47.097175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:07.875 [2024-11-19 08:49:47.097209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:07.875 [2024-11-19 08:49:47.097232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.875 [2024-11-19 08:49:47.097358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:07.875 [2024-11-19 08:49:47.097377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:07.875 [2024-11-19 08:49:47.097389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:07.875 [2024-11-19 08:49:47.097399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.875 [2024-11-19 08:49:47.097448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:07.875 [2024-11-19 08:49:47.097480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:07.875 [2024-11-19 08:49:47.097506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:07.875 [2024-11-19 08:49:47.097521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.875 [2024-11-19 08:49:47.097563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:07.875 [2024-11-19 08:49:47.097576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:07.875 [2024-11-19 08:49:47.097586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:07.875 [2024-11-19 08:49:47.097596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.875 [2024-11-19 08:49:47.097644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:07.875 [2024-11-19 08:49:47.097659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:07.875 [2024-11-19 08:49:47.097675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:07.875 [2024-11-19 08:49:47.097704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.875 [2024-11-19 08:49:47.097840] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 303.794 ms, result 0 00:31:08.809 08:49:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:08.809 08:49:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:09.069 08:49:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:09.069 08:49:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:09.069 08:49:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:09.069 08:49:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:09.069 Remove shared memory files 00:31:09.069 08:49:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:09.069 08:49:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:09.069 08:49:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 
00:31:09.069 08:49:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:09.069 08:49:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81781 00:31:09.069 08:49:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:09.069 08:49:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:09.069 ************************************ 00:31:09.069 END TEST ftl_upgrade_shutdown 00:31:09.069 ************************************ 00:31:09.069 00:31:09.069 real 1m32.669s 00:31:09.069 user 2m12.781s 00:31:09.069 sys 0m22.515s 00:31:09.069 08:49:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:09.069 08:49:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:09.069 08:49:48 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:31:09.069 08:49:48 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:31:09.069 08:49:48 ftl -- ftl/ftl.sh@14 -- # killprocess 74319 00:31:09.069 08:49:48 ftl -- common/autotest_common.sh@954 -- # '[' -z 74319 ']' 00:31:09.069 Process with pid 74319 is not found 00:31:09.069 08:49:48 ftl -- common/autotest_common.sh@958 -- # kill -0 74319 00:31:09.069 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74319) - No such process 00:31:09.069 08:49:48 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 74319 is not found' 00:31:09.069 08:49:48 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:31:09.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:09.069 08:49:48 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=82232 00:31:09.069 08:49:48 ftl -- ftl/ftl.sh@20 -- # waitforlisten 82232 00:31:09.069 08:49:48 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:09.069 08:49:48 ftl -- common/autotest_common.sh@835 -- # '[' -z 82232 ']' 00:31:09.069 08:49:48 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.069 08:49:48 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:09.069 08:49:48 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.069 08:49:48 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:09.069 08:49:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:09.069 [2024-11-19 08:49:48.301748] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 24.03.0 initialization... 
00:31:09.069 [2024-11-19 08:49:48.302126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82232 ] 00:31:09.328 [2024-11-19 08:49:48.486323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.328 [2024-11-19 08:49:48.608768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.265 08:49:49 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:10.265 08:49:49 ftl -- common/autotest_common.sh@868 -- # return 0 00:31:10.265 08:49:49 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:10.525 nvme0n1 00:31:10.525 08:49:49 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:31:10.525 08:49:49 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:10.525 08:49:49 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:10.783 08:49:49 ftl -- ftl/common.sh@28 -- # stores=68c34f75-ba66-44c8-bbd3-00b23bd8b04b 00:31:10.783 08:49:49 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:31:10.783 08:49:49 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 68c34f75-ba66-44c8-bbd3-00b23bd8b04b 00:31:11.043 08:49:50 ftl -- ftl/ftl.sh@23 -- # killprocess 82232 00:31:11.043 08:49:50 ftl -- common/autotest_common.sh@954 -- # '[' -z 82232 ']' 00:31:11.043 08:49:50 ftl -- common/autotest_common.sh@958 -- # kill -0 82232 00:31:11.043 08:49:50 ftl -- common/autotest_common.sh@959 -- # uname 00:31:11.043 08:49:50 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:11.043 08:49:50 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82232 00:31:11.043 08:49:50 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:11.043 08:49:50 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:11.043 killing process with pid 82232 00:31:11.043 08:49:50 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82232' 00:31:11.043 08:49:50 ftl -- common/autotest_common.sh@973 -- # kill 82232 00:31:11.043 08:49:50 ftl -- common/autotest_common.sh@978 -- # wait 82232 00:31:13.581 08:49:52 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:13.581 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:13.581 Waiting for block devices as requested 00:31:13.581 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:13.581 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:13.581 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:31:13.581 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:31:18.852 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:31:18.853 08:49:57 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:31:18.853 Remove shared memory files 00:31:18.853 08:49:57 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:18.853 08:49:57 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:31:18.853 08:49:57 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:31:18.853 08:49:57 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:31:18.853 08:49:57 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:18.853 08:49:57 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:31:18.853 
************************************ 00:31:18.853 END TEST ftl 00:31:18.853 ************************************ 00:31:18.853 00:31:18.853 real 11m54.215s 00:31:18.853 user 14m57.489s 00:31:18.853 sys 1m29.500s 00:31:18.853 08:49:57 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:18.853 08:49:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:18.853 08:49:57 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:18.853 08:49:57 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:18.853 08:49:57 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:18.853 08:49:57 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:18.853 08:49:57 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:18.853 08:49:57 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:18.853 08:49:57 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:18.853 08:49:57 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:31:18.853 08:49:57 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:31:18.853 08:49:57 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:31:18.853 08:49:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:18.853 08:49:57 -- common/autotest_common.sh@10 -- # set +x 00:31:18.853 08:49:57 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:31:18.853 08:49:57 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:31:18.853 08:49:57 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:31:18.853 08:49:57 -- common/autotest_common.sh@10 -- # set +x 00:31:20.301 INFO: APP EXITING 00:31:20.301 INFO: killing all VMs 00:31:20.301 INFO: killing vhost app 00:31:20.301 INFO: EXIT DONE 00:31:20.870 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:21.129 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:21.129 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:21.129 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:31:21.129 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:31:21.697 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:21.956 Cleaning 00:31:21.956 Removing: /var/run/dpdk/spdk0/config 00:31:21.956 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:21.956 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:21.956 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:21.956 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:21.956 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:21.956 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:21.956 Removing: /var/run/dpdk/spdk0 00:31:21.956 Removing: /var/run/dpdk/spdk_pid58060 00:31:21.956 Removing: /var/run/dpdk/spdk_pid58284 00:31:21.956 Removing: /var/run/dpdk/spdk_pid58513 00:31:21.956 Removing: /var/run/dpdk/spdk_pid58617 00:31:21.956 Removing: /var/run/dpdk/spdk_pid58667 00:31:21.956 Removing: /var/run/dpdk/spdk_pid58801 00:31:21.956 Removing: /var/run/dpdk/spdk_pid58819 00:31:21.956 Removing: /var/run/dpdk/spdk_pid59018 00:31:21.956 Removing: /var/run/dpdk/spdk_pid59123 00:31:21.956 Removing: /var/run/dpdk/spdk_pid59232 00:31:21.956 Removing: /var/run/dpdk/spdk_pid59354 00:31:21.956 Removing: /var/run/dpdk/spdk_pid59458 00:31:21.956 Removing: /var/run/dpdk/spdk_pid59499 00:31:21.956 Removing: /var/run/dpdk/spdk_pid59542 00:31:21.956 Removing: /var/run/dpdk/spdk_pid59614 00:31:21.956 Removing: /var/run/dpdk/spdk_pid59709 00:31:21.956 Removing: /var/run/dpdk/spdk_pid60178 00:31:21.956 Removing: /var/run/dpdk/spdk_pid60253 
00:31:21.956 Removing: /var/run/dpdk/spdk_pid60327 00:31:21.956 Removing: /var/run/dpdk/spdk_pid60343 00:31:21.956 Removing: /var/run/dpdk/spdk_pid60478 00:31:21.956 Removing: /var/run/dpdk/spdk_pid60494 00:31:21.956 Removing: /var/run/dpdk/spdk_pid60630 00:31:21.956 Removing: /var/run/dpdk/spdk_pid60646 00:31:21.956 Removing: /var/run/dpdk/spdk_pid60716 00:31:21.956 Removing: /var/run/dpdk/spdk_pid60734 00:31:21.956 Removing: /var/run/dpdk/spdk_pid60798 00:31:21.956 Removing: /var/run/dpdk/spdk_pid60816 00:31:21.956 Removing: /var/run/dpdk/spdk_pid60998 00:31:21.956 Removing: /var/run/dpdk/spdk_pid61040 00:31:21.956 Removing: /var/run/dpdk/spdk_pid61129 00:31:21.956 Removing: /var/run/dpdk/spdk_pid61312 00:31:21.956 Removing: /var/run/dpdk/spdk_pid61402 00:31:21.956 Removing: /var/run/dpdk/spdk_pid61444 00:31:21.956 Removing: /var/run/dpdk/spdk_pid61922 00:31:21.956 Removing: /var/run/dpdk/spdk_pid62026 00:31:21.956 Removing: /var/run/dpdk/spdk_pid62140 00:31:21.956 Removing: /var/run/dpdk/spdk_pid62199 00:31:21.956 Removing: /var/run/dpdk/spdk_pid62219 00:31:21.956 Removing: /var/run/dpdk/spdk_pid62303 00:31:21.956 Removing: /var/run/dpdk/spdk_pid62942 00:31:21.956 Removing: /var/run/dpdk/spdk_pid62983 00:31:21.956 Removing: /var/run/dpdk/spdk_pid63504 00:31:21.956 Removing: /var/run/dpdk/spdk_pid63608 00:31:22.216 Removing: /var/run/dpdk/spdk_pid63723 00:31:22.216 Removing: /var/run/dpdk/spdk_pid63776 00:31:22.216 Removing: /var/run/dpdk/spdk_pid63801 00:31:22.216 Removing: /var/run/dpdk/spdk_pid63831 00:31:22.216 Removing: /var/run/dpdk/spdk_pid65715 00:31:22.216 Removing: /var/run/dpdk/spdk_pid65853 00:31:22.216 Removing: /var/run/dpdk/spdk_pid65862 00:31:22.216 Removing: /var/run/dpdk/spdk_pid65880 00:31:22.216 Removing: /var/run/dpdk/spdk_pid65921 00:31:22.216 Removing: /var/run/dpdk/spdk_pid65925 00:31:22.216 Removing: /var/run/dpdk/spdk_pid65937 00:31:22.216 Removing: /var/run/dpdk/spdk_pid65986 00:31:22.216 Removing: /var/run/dpdk/spdk_pid65991 00:31:22.216 Removing: /var/run/dpdk/spdk_pid66003 00:31:22.216 Removing: /var/run/dpdk/spdk_pid66048 00:31:22.216 Removing: /var/run/dpdk/spdk_pid66052 00:31:22.216 Removing: /var/run/dpdk/spdk_pid66064 00:31:22.216 Removing: /var/run/dpdk/spdk_pid67450 00:31:22.216 Removing: /var/run/dpdk/spdk_pid67560 00:31:22.216 Removing: /var/run/dpdk/spdk_pid68981 00:31:22.216 Removing: /var/run/dpdk/spdk_pid70384 00:31:22.216 Removing: /var/run/dpdk/spdk_pid70511 00:31:22.216 Removing: /var/run/dpdk/spdk_pid70643 00:31:22.216 Removing: /var/run/dpdk/spdk_pid70763 00:31:22.216 Removing: /var/run/dpdk/spdk_pid70911 00:31:22.216 Removing: /var/run/dpdk/spdk_pid70992 00:31:22.216 Removing: /var/run/dpdk/spdk_pid71140 00:31:22.216 Removing: /var/run/dpdk/spdk_pid71508 00:31:22.216 Removing: /var/run/dpdk/spdk_pid71545 00:31:22.216 Removing: /var/run/dpdk/spdk_pid72027 00:31:22.216 Removing: /var/run/dpdk/spdk_pid72213 00:31:22.216 Removing: /var/run/dpdk/spdk_pid72318 00:31:22.216 Removing: /var/run/dpdk/spdk_pid72430 00:31:22.216 Removing: /var/run/dpdk/spdk_pid72480 00:31:22.216 Removing: /var/run/dpdk/spdk_pid72511 00:31:22.216 Removing: /var/run/dpdk/spdk_pid72805 00:31:22.216 Removing: /var/run/dpdk/spdk_pid72872 00:31:22.216 Removing: /var/run/dpdk/spdk_pid72959 00:31:22.216 Removing: /var/run/dpdk/spdk_pid73377 00:31:22.216 Removing: /var/run/dpdk/spdk_pid73523 00:31:22.216 Removing: /var/run/dpdk/spdk_pid74319 00:31:22.216 Removing: /var/run/dpdk/spdk_pid74468 00:31:22.216 Removing: /var/run/dpdk/spdk_pid74665 00:31:22.216 Removing: 
/var/run/dpdk/spdk_pid74778 00:31:22.216 Removing: /var/run/dpdk/spdk_pid75127 00:31:22.216 Removing: /var/run/dpdk/spdk_pid75403 00:31:22.216 Removing: /var/run/dpdk/spdk_pid75756 00:31:22.216 Removing: /var/run/dpdk/spdk_pid75955 00:31:22.216 Removing: /var/run/dpdk/spdk_pid76085 00:31:22.216 Removing: /var/run/dpdk/spdk_pid76149 00:31:22.216 Removing: /var/run/dpdk/spdk_pid76299 00:31:22.216 Removing: /var/run/dpdk/spdk_pid76330 00:31:22.216 Removing: /var/run/dpdk/spdk_pid76393 00:31:22.216 Removing: /var/run/dpdk/spdk_pid76599 00:31:22.216 Removing: /var/run/dpdk/spdk_pid76847 00:31:22.216 Removing: /var/run/dpdk/spdk_pid77274 00:31:22.216 Removing: /var/run/dpdk/spdk_pid77725 00:31:22.216 Removing: /var/run/dpdk/spdk_pid78169 00:31:22.216 Removing: /var/run/dpdk/spdk_pid78703 00:31:22.216 Removing: /var/run/dpdk/spdk_pid78841 00:31:22.216 Removing: /var/run/dpdk/spdk_pid78945 00:31:22.216 Removing: /var/run/dpdk/spdk_pid79644 00:31:22.216 Removing: /var/run/dpdk/spdk_pid79718 00:31:22.216 Removing: /var/run/dpdk/spdk_pid80201 00:31:22.216 Removing: /var/run/dpdk/spdk_pid80646 00:31:22.216 Removing: /var/run/dpdk/spdk_pid81192 00:31:22.216 Removing: /var/run/dpdk/spdk_pid81320 00:31:22.216 Removing: /var/run/dpdk/spdk_pid81367 00:31:22.216 Removing: /var/run/dpdk/spdk_pid81437 00:31:22.216 Removing: /var/run/dpdk/spdk_pid81494 00:31:22.216 Removing: /var/run/dpdk/spdk_pid81564 00:31:22.216 Removing: /var/run/dpdk/spdk_pid81781 00:31:22.216 Removing: /var/run/dpdk/spdk_pid81842 00:31:22.216 Removing: /var/run/dpdk/spdk_pid81915 00:31:22.216 Removing: /var/run/dpdk/spdk_pid81988 00:31:22.216 Removing: /var/run/dpdk/spdk_pid82018 00:31:22.216 Removing: /var/run/dpdk/spdk_pid82107 00:31:22.216 Removing: /var/run/dpdk/spdk_pid82232 00:31:22.216 Clean 00:31:22.474 08:50:01 -- common/autotest_common.sh@1453 -- # return 0 00:31:22.474 08:50:01 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:31:22.474 08:50:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:22.474 08:50:01 -- common/autotest_common.sh@10 -- # set +x 00:31:22.474 08:50:01 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:31:22.474 08:50:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:22.474 08:50:01 -- common/autotest_common.sh@10 -- # set +x 00:31:22.474 08:50:01 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:22.474 08:50:01 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:22.474 08:50:01 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:22.474 08:50:01 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:31:22.474 08:50:01 -- spdk/autotest.sh@398 -- # hostname 00:31:22.475 08:50:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:22.733 geninfo: WARNING: invalid characters removed from testname! 
00:31:49.277 08:50:25 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:49.537 08:50:28 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:52.826 08:50:31 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:54.730 08:50:34 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:58.042 08:50:36 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:00.577 08:50:39 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:03.113 08:50:41 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:03.113 08:50:41 -- spdk/autorun.sh@1 -- $ timing_finish 00:32:03.113 08:50:41 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:32:03.113 08:50:41 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:03.113 08:50:41 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:32:03.113 08:50:41 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:03.113 + [[ -n 5289 ]] 00:32:03.113 + sudo kill 5289 00:32:03.122 [Pipeline] } 00:32:03.136 [Pipeline] // timeout 00:32:03.141 [Pipeline] } 00:32:03.154 [Pipeline] // stage 00:32:03.158 [Pipeline] } 00:32:03.171 [Pipeline] // catchError 00:32:03.179 [Pipeline] stage 00:32:03.181 [Pipeline] { (Stop VM) 00:32:03.192 [Pipeline] sh 00:32:03.470 + vagrant halt 00:32:06.762 ==> default: Halting domain... 
00:32:12.049 [Pipeline] sh 00:32:12.331 + vagrant destroy -f 00:32:14.865 ==> default: Removing domain... 00:32:15.446 [Pipeline] sh 00:32:15.727 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:32:15.736 [Pipeline] } 00:32:15.751 [Pipeline] // stage 00:32:15.757 [Pipeline] } 00:32:15.770 [Pipeline] // dir 00:32:15.775 [Pipeline] } 00:32:15.789 [Pipeline] // wrap 00:32:15.795 [Pipeline] } 00:32:15.807 [Pipeline] // catchError 00:32:15.817 [Pipeline] stage 00:32:15.819 [Pipeline] { (Epilogue) 00:32:15.832 [Pipeline] sh 00:32:16.114 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:21.393 [Pipeline] catchError 00:32:21.395 [Pipeline] { 00:32:21.407 [Pipeline] sh 00:32:21.689 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:21.689 Artifacts sizes are good 00:32:21.699 [Pipeline] } 00:32:21.714 [Pipeline] // catchError 00:32:21.724 [Pipeline] archiveArtifacts 00:32:21.731 Archiving artifacts 00:32:21.841 [Pipeline] cleanWs 00:32:21.852 [WS-CLEANUP] Deleting project workspace... 00:32:21.852 [WS-CLEANUP] Deferred wipeout is used... 00:32:21.858 [WS-CLEANUP] done 00:32:21.860 [Pipeline] } 00:32:21.875 [Pipeline] // stage 00:32:21.880 [Pipeline] } 00:32:21.892 [Pipeline] // node 00:32:21.898 [Pipeline] End of Pipeline 00:32:21.938 Finished: SUCCESS